El TecnoBaúl de Kiquenet

Kiquenet boring stories

Archive for the ‘.NET’ Category

MS Search Service, Indexing Service and SQL Server: search engine for ASP.NET website

Posted by kiquenet on 21 June 2016

Quick and Dirty method to Combine MS Search Service, Indexing Service and SQL Server to provide a unified search engine for your ASP.NET website

A company website needed a search engine.

They had the contents in .aspx pages, not a problem, but they also had forums whose contents were collated in a database table, in a column to be precise. They wanted me to display the results from these two sources through a common search engine. Since I had little time to write a search engine of my own, I put the power of MS Search Service, Indexing Service and SQL Server together to do the task for me. There is a lot of scope for enhancement but here is how you can implement a very basic yet powerful search engine of your own.

STEP I: Create a Web Catalog in Indexing Service

By default, the indexing service has two catalogs, one for the file system (System) and one for the default web site (Web).
If Web catalog is not present, you can easily create one.

1. Open Control Panel –> Administrative Tools –> Computer Management
2. Scroll down to Computer Management–> Services and Applications –> Indexing Service in the MMC
3. Right click Indexing Service and choose New –> Catalog
4. In the name field give Web and choose C:\Inetpub as the location
5. Right click the newly created catalog, choose Properties. Click on the Tracking tab of the properties window. Select "Default Web Site" as the WWW server.
6. Restart Indexing Service
7. Go to Computer Management –> Services and Applications –> Services and configure Indexing Service as Automatic if it is Manual or Disabled.

STEP II: Optimize ASPX and ASCX files for full-text search

By default, the *.aspx and *.ascx file types are treated as text files, which are not optimized for searching by the Indexing Service. To optimize searching for these two file types, copy the following into a new .reg file and run it on your computer. The customary warning: editing the registry incorrectly may prevent your computer from running properly. Edit the registry at your own risk. I may not be held responsible for the damage you do to your computer by incorrectly following the steps below.

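The registry snippet itself did not survive in this copy of the article. A commonly used sketch for this task (a hypothetical reconstruction, not the author's original file) maps both extensions to the plain-text persistent handler so the Indexing Service filters their content:

```reg
Windows Registry Editor Version 5.00

; Map .aspx and .ascx to the plain-text persistent handler
; so the Indexing Service indexes their content.
[HKEY_CLASSES_ROOT\.aspx\PersistentHandler]
@="{5e941d80-bf96-11cd-b579-08002b30bfeb}"

[HKEY_CLASSES_ROOT\.ascx\PersistentHandler]
@="{5e941d80-bf96-11cd-b579-08002b30bfeb}"
```

The GUID shown is the standard plain-text filter persistent handler; verify it against your own system before applying.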

You must have Index Files with Unknown Extensions enabled. To enable it, right-click Indexing Service, choose Properties, and on the Generation tab check the Index Files with Unknown Extensions checkbox. Then restart the computer, stop the Indexing Service, delete all the contents of the catalog.wci folder (not the folder itself) corresponding to your catalog (in this case C:\Inetpub\catalog.wci), start the Indexing Service, and allow it to rebuild the catalog.

STEP III: Using Full-Text Searches directly in ASP.NET Applications

This is not actually a step but a side step where you can take a pause for a moment and test whether your newly created catalog is returning some results. If you don’t have a database to worry about, then this might be your last step unless you want to link the Indexing Service with SQL Server.

The Indexing Service exposes itself via the OLEDB provider MSIDXS. You can take full advantage of the service in your ASP.NET application via ADO.NET. If you have a TextBox (TextBox1), a Button (Button1) and a DataGrid (DataGrid1) on your web form and the Web catalog in place, this might as well be the content of your button click handler:

using System.Data;
using System.Data.OleDb;

private void Button1_Click(object sender, EventArgs e)
{
      string strCatalog = "Web";
      // NOTE: concatenating user input into the query is vulnerable to injection;
      // shown as-is for brevity.
      string strQuery = "Select FileName, Rank, VPath from SCOPE() where FREETEXT('" + TextBox1.Text + "')";
      string connString = "Provider=MSIDXS.1;Integrated Security .='';Data Source='" + strCatalog + "'";
      OleDbConnection connection = new OleDbConnection(connString);
      OleDbDataAdapter da = new OleDbDataAdapter(strQuery, connection);
      DataSet ds = new DataSet();
      da.Fill(ds);
      DataView source = new DataView(ds.Tables[0]);
      DataGrid1.DataSource = source;
      DataGrid1.DataBind();
}

STEP IV: Link Indexing Service with SQL Server

The next step is to link the Indexing Service with your SQL Server. Open Query Analyzer or your favourite SQL script editor. Run the following script.

EXEC sp_addlinkedserver FTIndexWeb, 'Index Server', 'MSIDXS', 'Web'

where FTIndexWeb is the chosen linked server name, and Web is the catalog name you created in STEP I.

STEP V: Querying Indexing Service via SQL Server

Let’s modify the previous query and run it in SQL server. Run the following query in Query Analyzer.

SELECT Q.FileName, Q.Rank, Q.VPath
FROM OPENQUERY(FTIndexWeb,
    'Select FileName, Rank, VPath
     from SCOPE()
     where FREETEXT(''Calcutta'')'
) AS Q

Replace FTIndexWeb with whatever linked server name you chose in step IV and Calcutta with your search keyword(s).

STEP VI: Enabling a table/column in SQL Server for full-text searches

Open Enterprise Manager.

Browse to Console Root –> Microsoft SQL Servers –> Databases –> Tables. Check two things before you proceed.

1. The table where you want full-text searching enabled, must have some unique constraint. If a primary key or a unique constraint is not present, create an "ID" column and apply a unique constraint.
2. Microsoft Search Service (mssearch.exe) must have been enabled and running in your computer. If not, browse to Computer Management –> Services and Applications–> Services in Computer Management MMC and configure Microsoft Search Service as Automatic and start the service.
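For example, if the forum topics were keyed on a topicid column (names here follow the forums_topics examples used later in this article), the unique constraint from point 1 could be added like this:

```sql
-- Add a unique constraint so full-text indexing has a key column to work with
ALTER TABLE forums_topics
ADD CONSTRAINT UQ_forums_topics_topicid UNIQUE (topicid);
```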

On the Enterprise Manager MMC, right click on your table and choose Full-Text Index Table –> Define Full-Text Indexing on a table. If the option is grayed out, check #2 above.

Click Next on the wizard that pops up.
Choose the unique index and click Next.
Choose the columns where you want indexing enabled. Click Next.
Give the catalog a name and specify a physical location to store the catalog. Click Next.

If you want the control over how and when the catalog is filled (full or incremental) click on New Catalog Schedule.
After configuring it, come back to Full-Text Indexing Wizard and click Next. Click Finish.

The wizard takes a minute or two to setup the catalog.

STEP VII: Querying Full-Text Catalog in SQL Server

Let’s test the newly created catalog in SQL Server. Run the following query.

SELECT FT_TBL.subject, KEY_TBL.RANK, FT_TBL.topicid
FROM forums_topics AS FT_TBL,
CONTAINSTABLE ( forums_topics
                ,   message
                , '"Calcutta"' ) AS KEY_TBL
WHERE FT_TBL.topicid = KEY_TBL.[KEY]

Forums_Topics is the table name and Message is the column name on which full-text catalog is built. Replace Calcutta with your search keyword(s).

STEP VIII: Combining the results

The steps to combine the results would be to

1. Create a temporary table
2. Insert the results of the first query
3. Insert the results of the second query
4. Query the temp table
5. Drop the temp table

We need a stored procedure for this and here it is:

CREATE PROCEDURE sp_Accounts_SearchSite
@FreeText varchar (255)
AS
BEGIN
CREATE TABLE #tempresults(
FileNames varchar (255),
Rank int,
VPath varchar(255))

DECLARE @sql nvarchar(1000)

SET @sql = N'INSERT INTO #tempresults(FileNames, Rank, VPath) ' + CHAR(13) +
N'SELECT Q.FileName As FileNames, Q.Rank As Rank, Q.VPath As VPath ' + CHAR(13) +
N'FROM OPENQUERY(FTIndexWeb, ''Select FileName, Rank, VPath from SCOPE() where FREETEXT(''''' + @FreeText + N''''')'' ) AS Q'
EXECUTE sp_executesql @sql

SET @sql = N'INSERT INTO #tempresults(FileNames, Rank, VPath) ' + CHAR(13) +
N'SELECT FT_TBL.subject As FileNames, KEY_TBL.RANK As Rank, FT_TBL.topicid As VPath ' + CHAR(13) +
N'FROM forums_topics AS FT_TBL, ' + CHAR(13) +
N'CONTAINSTABLE ( forums_topics ' + CHAR(13) +
N', message' + CHAR(13) +
N', ''"' + @FreeText + N'"'' ) ' + CHAR(13) +
N'AS KEY_TBL' + CHAR(13) +
N'WHERE FT_TBL.topicid = KEY_TBL.[KEY] '
EXECUTE sp_executesql @sql

SELECT FileNames, Rank, VPath from #tempresults ORDER BY Rank DESC

DROP TABLE #tempresults
END

STEP IX: Modify your .NET Application

The rest is a piece of cake. Your Button click handler should now look like this:

using System.Data;
using System.Data.SqlClient; // Bye Bye OleDb

private void Button1_Click(object sender, EventArgs e)
{
      string connString = @"server=****;database=****;uid=****;pwd=****;";
      string storedProcName = "sp_Accounts_SearchSite";
      SqlConnection connection = new SqlConnection(connString);
      SqlCommand command = new SqlCommand( storedProcName, connection );
      command.CommandType = CommandType.StoredProcedure;
      command.Parameters.Add("@FreeText", TextBox1.Text);
      SqlDataAdapter sqlDA = new SqlDataAdapter();
      sqlDA.SelectCommand = command;
      DataSet dataSet = new DataSet();
      sqlDA.Fill( dataSet, "mySearchResults" );
      DataView source = new DataView(dataSet.Tables[0]);
      DataGrid1.DataSource = source;
      DataGrid1.DataBind();
}

The grid will show results from your file system as well as from your database tables. With everything indexed, the result is lightning fast, whether you have hundreds of results or millions.

Many of you might think that there remains a lot to be told. But didn’t I say it was quick and dirty? No pun intended.

To learn more about how to compose your own queries for full-text searches, visit the MSDN website at http://msdn.microsoft.com.

With a little logic of your own, you can have a nice search engine that queries different sources differently based on your own requirements.

For example, you can redefine the scope (Deep Traversal, Shallow Traversal ring a bell?) and can do regular-expression searches. You are the one to set your own limit.

By Ram Dash.

Reference: http://www.gisremotesensing.com/2011/05/unified-search-engine.html

Posted in .NET, ASP.NET, SQLServer

Indexing Server and Sql Server

Posted by kiquenet on 21 June 2016


EXEC sp_addlinkedserver Web, 'Index Server', 'MSIDXS', 'CarenetWeb'

where CarenetWeb is a catalog on the local server.

You can’t build a linked server to a remote IS catalog, but you can query it like this

Select * from OPENQUERY(LinkedServerName, 'Select DocTitle, VPath, Characterization from RemoteServerName.RemoteCatalogName..SCOPE()
where FileName like ''%.txt''')

Using Query Analyzer connected to the SQL Server, I executed sp_addlinkedserver:

EXEC sp_addlinkedserver FileSystem, 'Index Server', 'MSIDXS', 'CatalogName'

In the Sql Management console, under Security –> Linked Servers –>Properties, I set the Security tab option to "Be made using this security context:" and assigned a domain account that is a member of the local Administrators group on the remote server where the queried catalog resides.

Then constructed a query like this:

SELECT DocTitle, VPath, FileName, Size, Contents, Characterization, Path, URL, A_HRef, Rank, Write
FROM RemoteServer.CatalogName..SCOPE('"' + this.Scope + '"')
WHERE FREETEXT(Contents, '"' + SearchPhrase + '"')

I was then able to execute this while connected with a SQL account that was a db_owner.

The Sql Server Agent runs under a domain account, which is a member of the remote server’s Administrators group.

Verify that:

1) the IS server is running locally,
2) the catalog is indexing directories (expand the directories folder and verify there are directories there),
3) the directories indexed are both real and virtual (the virtual directories have blue globes on them),
4) there are documents in the physical directories.

You can also try the following code against your remote catalog — make sure you change this entry to reflect your actual remote server name and catalog:

@objIxsso, 'Catalog', 'query://remoteservername.remotecatalogname'

create procedure sp_indexServer @input char(255)
as
begin
declare @HResult int
declare @objIxsso int
declare @RecordSet int
declare @RecordCount int
declare @Count int
declare @fields int
declare @path varchar(100)
declare @vpath varchar(100)
declare @doctitle varchar(100)
declare @size varchar(100)
declare @filename varchar(100)
declare @write varchar(100)
declare @rank varchar(100)
declare @hitcount varchar(100)

EXEC @HResult = sp_OACreate 'Ixsso.Query', @objIxsso OUT
EXEC @HResult = sp_OASetProperty @objIxsso, 'Query', @input
EXEC @HResult = sp_OASetProperty @objIxsso, 'Catalog', 'query://remoteservername.remotecatalogname'
EXEC @HResult = sp_OASetProperty @objIxsso, 'Columns', 'path, vpath, doctitle, size, filename, write, rank, hitcount'
EXEC @HResult = sp_OASetProperty @objIxsso, 'MaxRecords', '200'
EXEC @HResult = sp_OASetProperty @objIxsso, 'SortBy', 'rank'
IF @HResult <> 0
    EXEC sp_OAGetErrorInfo @objIxsso

EXEC @HResult = sp_OAMethod @objIxsso, 'CreateRecordSet("nonsequential")', @RecordSet OUTPUT
IF @HResult <> 0
    EXEC sp_OAGetErrorInfo @objIxsso

EXEC @HResult = sp_OAGetProperty @RecordSet, 'RecordCount', @RecordCount OUTPUT
IF @HResult <> 0
    EXEC sp_OAGetErrorInfo @RecordSet

IF @RecordCount = 0
    print 'No matches found'
ELSE
    print convert(varchar(5), @RecordCount) + ' matches found'

SET @Count = 1
WHILE ( @Count <= @RecordCount )
BEGIN
    EXEC @HResult = sp_OAGetProperty @RecordSet, 'Fields', @fields OUT, 0
    EXEC @HResult = sp_OAGetProperty @fields, 'Value', @path OUTPUT
    EXEC @HResult = sp_OAGetProperty @RecordSet, 'Fields', @fields OUT, 1
    EXEC @HResult = sp_OAGetProperty @fields, 'Value', @vpath OUTPUT
    EXEC @HResult = sp_OAGetProperty @RecordSet, 'Fields', @fields OUT, 2
    EXEC @HResult = sp_OAGetProperty @fields, 'Value', @doctitle OUTPUT
    EXEC @HResult = sp_OAGetProperty @RecordSet, 'Fields', @fields OUT, 3
    EXEC @HResult = sp_OAGetProperty @fields, 'Value', @size OUTPUT
    EXEC @HResult = sp_OAGetProperty @RecordSet, 'Fields', @fields OUT, 4
    EXEC @HResult = sp_OAGetProperty @fields, 'Value', @filename OUTPUT
    EXEC @HResult = sp_OAGetProperty @RecordSet, 'Fields', @fields OUT, 5
    EXEC @HResult = sp_OAGetProperty @fields, 'Value', @write OUTPUT
    EXEC @HResult = sp_OAGetProperty @RecordSet, 'Fields', @fields OUT, 6
    EXEC @HResult = sp_OAGetProperty @fields, 'Value', @rank OUTPUT
    EXEC @HResult = sp_OAGetProperty @RecordSet, 'Fields', @fields OUT, 7
    EXEC @HResult = sp_OAGetProperty @fields, 'Value', @hitcount OUTPUT

    print @path + ', ' + @vpath + ', ' + @doctitle + ', ' + @size + ', ' + @filename + ', ' + @write + ', ' + @rank + ', ' + @hitcount

    EXEC @HResult = sp_OAMethod @RecordSet, 'MoveNext'
    select @Count = @Count + 1
    print @Count
END

EXEC @HResult = sp_OADestroy @fields
IF @HResult <> 0
    EXEC sp_OAGetErrorInfo @fields
EXEC @HResult = sp_OADestroy @RecordSet
IF @HResult <> 0
    EXEC sp_OAGetErrorInfo @RecordSet
EXEC @HResult = sp_OADestroy @objIxsso
IF @HResult <> 0
    EXEC sp_OAGetErrorInfo @objIxsso
end

Posted in .NET, SQLServer

Performance DataAccess Oracle and ODP.NET

Posted by kiquenet on 18 June 2016

FetchSize (OracleDataReader)

A number of blogs and sites mention increasing FetchSize of OracleDataReader to improve performance when fetching big volumes of data (e.g. thousands of rows). There are some documented experiments with exact numbers on this like:


The table had a little more than 155K rows and was 31 MB in size. Yet, it took more than 5 minutes for the data adapter to fill the data table.

The cause of the problem was the number of round trips the client needed to make to get all the rows from the database. To reduce the number of round trips, you can increase the fetch size so that on each turn the command object fetches more rows from the database. Here is how it's implemented:

using (OracleConnection conn = new OracleConnection())
{
     OracleCommand comm = new OracleCommand();
     comm.Connection = conn;
     comm.FetchSize = comm.FetchSize * 8; // 8x the 128 KB default
     comm.CommandText = "select * from some_table";

     conn.Open();
     OracleDataAdapter adap = new OracleDataAdapter(comm);
     System.Data.DataTable dt = new System.Data.DataTable();
     adap.Fill(dt);
}

Notice the line where the fetch size of the command object is increased to 8 times its default, which is 128 KB in ODP.NET. OracleDataReader also has a FetchSize property. By increasing the fetch size, you increase the cache size in memory used to fetch rows.

What we gained is up to 96% performance improvement. Here are some timings with different fetch sizes:

Fetch Size             Timing (MI:SS.FF3)
Default (128 KB)       05:20.290
Default x 8 (1 MB)     00:52.941
Default x 32 (4 MB)    00:26.008
Default x 64 (8 MB)    00:12.409

Reference: http://metekarar.blogspot.com.es/2013/04/performance-improvement-for-odpnet.html

Strangely, unless the connection pooling is explicitly disabled (e.g. in the connection string), the increase/decrease of FetchSize stops having any effect. When the pooling is disabled though, it’s clear that the FetchSize can improve the performance (the more records, the bigger the effect).

It turns out that this unexpected behavior is limited to the following conditions: 1) The SELECT statement is exactly the same 2) The pooling is ON 3) The self-tuning is ON

Only in those conditions the first time the FetchSize is set, it gets somehow cached by ODP.NET and attempts to change it don’t work.

Reference: Mark Williams

string with_pool = "User Id=hr; Data Source=oramag; ..."; // fragment; remaining attributes elided in the original

Connection Pooling Overview

Making specific connection pooling parameter recommendations is difficult, because workloads can vary greatly. One general recommendation is to have a sufficiently high minimum pool size. I have often seen connection pools drained to the minimum during a lull in activity. When activity picks up rapidly, the connection pool has to create connections quickly rather than use connections in the pool. In most cases, the application server reduces the number of connections and remains relatively idle. Thus, there is no reason to have a low minimum pool size.

Controlling Fetch Size

Retrieving data from the database tier to the middle tier (or client tier) is one of the most expensive operations with respect to performance. If the end user consumes a lot of data, you will want your application to minimize the number of round-trip data fetches.

By default, ODP.NET will read 64KB of data from a result set at a time. You change this value by setting the FetchSize attribute for an OracleDataReader object. However, rather than arbitrarily setting the fetch size, ODP.NET provides the ability to control the number of rows that are fetched from a result set per round trip. The two properties you use to do this are RowSize and FetchSize.

RowSize is a property of the OracleCommand object, and the value for this property is assigned by ODP.NET when the statement associated with the command object is executed. You explicitly set the value of the FetchSize property. For example, if you want to retrieve 100 rows at a time from the result set per round trip, you set the FetchSize property:

dr.FetchSize = cmd.RowSize * 100;

If you are tempted to fetch only a single row at a time to "save resources and increase performance" (I have sometimes heard this advice), you will actually be dramatically reducing, rather than increasing, runtime performance. In addition, for this set of data, there is not much performance increase when fetching more than 1,000 rows at a time. Of course, this is also a narrow data set (with only two small columns). Apply this technique to a result set representative of your environment to determine optimal values for setting your fetch size.

Of course, there is a cost if the fetch size is arbitrarily large. More client-side memory and processor cycles will be needed to store and manage a larger amount of data. The goal is to find a high-performing balance between the number of round trips and the amount of data retrieved per trip.

Statement Caching

You enable the statement caching feature by setting the Statement Cache Size connection string parameter. The default value of this parameter is 0, meaning that statement caching is disabled by default. You set this parameter to the number of statements you plan to keep cached during your application’s lifetime. For example, if you plan to cache 16 statements, you will set this value to 16. ODP.NET will then cache the 16 most recently used statements. The 17th most recently used statement is aged out of the cache by the last 16 unique statements.

With statement caching enabled, ODP.NET will automatically cache any statement you execute. If you have enabled statement caching and you do not want to place a statement in the cache, set the AddToStatementCache property on OracleCommand to false:

string constr =
  "User Id=hr; Data Source=oramag; Statement Cache Size=1";
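To keep a one-off statement out of the cache, set AddToStatementCache on the command. A minimal sketch, assuming an Oracle database is reachable with the connection string above (the table name is illustrative):

```csharp
// Statement caching is on (Statement Cache Size=1 in constr above),
// but this particular statement should not displace a cached one.
OracleConnection con = new OracleConnection(constr);
con.Open();
OracleCommand cmd = con.CreateCommand();
cmd.CommandText = "select * from departments"; // illustrative table
cmd.AddToStatementCache = false;               // skip the cache for this command
OracleDataReader rdr = cmd.ExecuteReader();
```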


create or replace procedure get_sales (p_refcur out sys_refcursor) is
begin
  -- open the cursor using the passed-in ref cursor
  -- sys_refcursor is a built-in type
  open p_refcur for select * from sales;
end;
/


 // connection string — be sure to adjust for your environment
public const string constr = "User Id=sh; Password=sh; Data Source=otndemo; Pooling=false; Enlist=false";

      // enable extended sql tracing
      // this can be used to verify the number of rows fetched
      // uncomment to create trace file in user_dump_dest directory
      // EnableExtendedTrace(con);

static void EnableExtendedTrace(OracleConnection con)
{
    // create, set up, and execute a command to enable extended sql trace
    OracleCommand cmd = con.CreateCommand();

    cmd.CommandText = "alter session set events '10046 trace name context forever, level 4'";
    cmd.ExecuteNonQuery();

    // clean up
    cmd.Dispose();
}

      // do the fetch test
      // passing 0 for "rows" will use default fetch size of 64k
      FetchTest(con, 100);
      FetchTest(con, 1000);
      FetchTest(con, 10000);
      FetchTest(con, 0);

static void FetchTest(OracleConnection con, int rows)
{
      // used to track execution duration
      DateTime timeStart;
      DateTime timeEnd;
      double totalSeconds;

      // create and set up a command to call the stored procedure
      OracleCommand cmd = con.CreateCommand();
      cmd.CommandText = "get_sales";
      cmd.CommandType = CommandType.StoredProcedure;

      // create the ref cursor parameter and add it to the command parameter collection
      OracleParameter prm = new OracleParameter("p_refcur", OracleDbType.RefCursor, ParameterDirection.Output);
      cmd.Parameters.Add(prm);

      // get the data reader
      OracleDataReader dr = cmd.ExecuteReader();

      // There are 3 steps in getting the m_rowSize property value...
      // Step 1 - get the data reader type
      Type type = dr.GetType();

      // Step 2 - get the "m_rowSize" field info (requires using System.Reflection)
      FieldInfo fi = type.GetField("m_rowSize", BindingFlags.Instance | BindingFlags.NonPublic);

      // Step 3 - get the value of m_rowSize for the dr object
      int rowSize = (int)fi.GetValue(dr);

      // if rows is greater than 0, use it to set the fetch size;
      // otherwise use the default size (64 KB)
      if (rows > 0)
      {
        // set the fetch size on the dr object using the row size
        dr.FetchSize = rowSize * rows;
      }

      // capture test start time
      timeStart = DateTime.Now;

      // simply loop, forcing the entire result set to be fetched
      while (dr.Read())      {      }

      // capture test end time
      timeEnd = DateTime.Now;

      // calculate total seconds for this test
      totalSeconds = timeEnd.Subtract(timeStart).TotalSeconds;

      // display time used for the test
      Console.WriteLine("Number of rows = {0,5}: {1,8:###.0000} total seconds, fetch size = {2,9:###,###,###} bytes.", rows, totalSeconds, dr.FetchSize);

      // clean up
      dr.Dispose();
      cmd.Dispose();
}



From Alex Keh (Oracle)
This video describes ODP.NET best practices for performance:

The slides are here:

The code samples for many of the performance patterns are available in the following tutorials:

Oracle SQL High Performance tuning
Oracle Performance Survival Guide
Troubleshooting Oracle Performance


Bind variables

Array Select

Array insert



Connection Pool (CP)

    Ensure you have enough connections in the CP – better to have too many than too few.
    OS-authenticated CP is available with ODP.NET 11g.
    Keep the CP in a steady state – never destroy or create a large number of connections at once.
    Close/Dispose connections explicitly – don't rely on the garbage collector.
    You can monitor CP behavior using the ODP.NET performance counters.

Bind Variables

    Always use Bind Variables. Using Bind Variables will prevent reparsing of frequently executed statements. Literal value changes in commands force a reparse. Reparsing is fairly expensive CPU operation and requires shared pool locks.
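As an illustration of the point above, bind the value instead of concatenating a literal; the statement text then never changes, so it is parsed once. A sketch assuming an open OracleConnection con (table and column names are illustrative):

```csharp
// Bad:  cmd.CommandText = "select * from emp where deptno = " + deptNo;  (new parse per literal)
// Good: bind the value; shared pool reuse, no reparse.
OracleCommand cmd = con.CreateCommand();
cmd.CommandText = "select * from emp where deptno = :deptno";
cmd.Parameters.Add("deptno", OracleDbType.Int32).Value = 20;
OracleDataReader rdr = cmd.ExecuteReader();
```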

Statement Caching

    Using statement caching retains parsed statement in shared pool.
    Cursor stays open on client side and Metadata remains on client
    Best used/works with Bind Variables
    Caching retains the most recently executed statements (the last 10 by default).
    Developer can choose which statement to cache.

Statement Cache Size=0 (no caching)

Statement Cache Size=1 (caching)

With self-tuning, ODP.NET changes the cache size dynamically at runtime and provides automatic optimization. Self-tuning is enabled by default and no code changes are required.


Data Retrieval

You can control how much data is retrieved from the database per round-trip. Too much data can cause excessive client-side memory usage, and too little may cause additional round-trips. You can use OracleCommand.RowSize and OracleDataReader.FetchSize to control the result. FetchSize can be set as a multiple of RowSize, and RowSize is populated dynamically after statement execution.

FetchSize = RowSize X 1
FetchSize = RowSize X 100

Mass Data Movement with Arrays

    PL/SQL associative arrays can be used to pass large amounts of data between .NET and DB of the same data type.
    If you are executing the same statement multiple times, you could use a parameter array binding.
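A sketch of the second point, parameter array binding via ODP.NET's ArrayBindCount, assuming an open OracleConnection con (table and column names are illustrative):

```csharp
// Insert three rows in a single round trip via array binding.
string[] names = { "HR", "IT", "SALES" };
OracleCommand cmd = con.CreateCommand();
cmd.CommandText = "insert into dept (dname) values (:dname)";
cmd.ArrayBindCount = names.Length; // one execution per array element
cmd.Parameters.Add("dname", OracleDbType.Varchar2, names, ParameterDirection.Input);
cmd.ExecuteNonQuery();
```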

Statement Batching

    You can execute multiple commands in one DB round-trip using OracleDataAdapter.UpdateBatchSize.
    You can use anonymous PL/SQL for disparate or similar statements
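A sketch of batching through OracleDataAdapter.UpdateBatchSize, assuming an open OracleConnection con and a DataTable dt with pending row changes (names are illustrative):

```csharp
// Send pending DataTable changes in batches of 50 commands per round trip.
OracleDataAdapter adapter = new OracleDataAdapter("select deptno, dname from dept", con);
OracleCommandBuilder builder = new OracleCommandBuilder(adapter); // generates insert/update/delete
adapter.UpdateBatchSize = 50; // 1 = no batching (default), 0 = one batch for everything
adapter.Update(dt);           // dt: a DataTable with modified rows
```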

ODP.NET DataTypes

    Try to avoid unnecessary datatype conversions between .NET and the DB
    Use OracleParameter.DbType

    public void CreateOracleDbParameter()
    {
        OracleParameter parameter = new OracleParameter();
        parameter.ParameterName = "pDName";
        parameter.DbType = DbType.String;
        parameter.Value = "ENGINEERING";
        parameter.SourceColumn = "DName";
    }

Use REF Cursors

With REF cursors, you can retrieve data as needed and control how much is retrieved via FetchSize. You can fill a DataSet with just a portion of the REF cursor result. You can also pass a REF cursor into another stored procedure as an input parameter. Use the OracleRefCursor class.

Oracle Performance Tuning in Visual Studio

    You can tune ad-hoc SQLs in query window

    Tune bad SQL using Oracle Performance Analyzer – requires:

        SYSDBA privilege

        database license for Oracle Diagnostic Pack

        database license for Oracle Tuning Pack


These are built into Oracle Database 10g and are invaluable for diagnosing Performance issues.

AWR – Automatic Workload Repository

    Evolution of statspack

    builtin repository

    captures performance stats at regular intervals

ADDM – Automatic Database Diagnostic Monitor

    Analyses AWR stats and generates recommendations

AWR and ADDM nodes are now available in Visual Studio.




using (OracleConnection conn = new OracleConnection(@"Data Source=source;User Id=username;Password=password;"))
{
        int noOfRowsExpected = 100; // example value
        OracleCommand cmd = new OracleCommand();
        cmd.Connection = conn;
        cmd.CommandType = CommandType.StoredProcedure;
        cmd.CommandText = "StoredProcedureName";
        cmd.Parameters.Add("REF_CURSOR_SAMPLE", OracleDbType.RefCursor).Direction = ParameterDirection.Output;
        conn.Open();
        OracleDataReader reader = cmd.ExecuteReader();
        FieldInfo fieldInfo = reader.GetType().GetField("m_rowSize", BindingFlags.Instance | BindingFlags.NonPublic);
        int cmdRowSize = (int)fieldInfo.GetValue(reader);
        reader.FetchSize = cmdRowSize * noOfRowsExpected;
        while (reader.Read())
        {
             // do your logic here
        }
}

Troubleshooting Oracle Performance

The default fetch size of ODP.NET (65,536) is defined in bytes, not rows. You can change this value through the FetchSize property provided by the OracleCommand and OracleDataReader classes. The following code snippet is an example of how to set the value for fetching 100 rows.

Notice how the RowSize property of the OracleCommand class is used to compute the amount of memory needed to store the 100 rows. The C# program in the RowPrefetching.cs file provides a complete example:

sql = "SELECT id, pad FROM t";
command = new OracleCommand(sql, connection);
reader = command.ExecuteReader();
reader.FetchSize = command.RowSize * 100;
while (reader.Read())
    id = reader.GetDecimal(0);
    pad = reader.GetString(1);
    // process data

In recent ODP.NET versions, you can also change the default fetch size through a registry entry (typically HKEY_LOCAL_MACHINE\SOFTWARE\ORACLE\ODP.NET\<Assembly_Version>\FetchSize, where <Assembly_Version> is the full version number of Oracle.DataAccess.dll).




  pkg_emp.get_emps(l_emp_cur);
  l_cur_no := DBMS_SQL.TO_CURSOR_NUMBER(l_emp_cur);
  DBMS_SQL.DESCRIBE_COLUMNS(l_cur_no, l_col_cnt, l_rec_tab);

  FOR i IN 1..l_col_cnt LOOP
     DBMS_OUTPUT.PUT_LINE('Column ' || l_rec_tab(i).col_name || ' max length of ' || TO_CHAR(l_rec_tab(i).col_max_len));
  END LOOP;



      <add name="DllPath"               value="N:\ORACLE\ORA11odacR5"/>
      <add name="FetchSize"             value="65536"/>
      <add name="PromotableTransaction" value="promotable"/>
      <add name="StatementCacheSize"    value="0"/>
      <add name="TraceFileName"         value="%TEMP%\odpnet.trc"/>
      <add name="TraceLevel"            value="0"/>
      <add name="TraceOption"           value="0"/>

  <!-- Here we're telling the .NET framework two things:
       - which version of Oracle.DataAccess.dll we want to be used
       - and from where exactly it should load the assembly.
       Any version of Oracle.DataAccess.dll in the redirected range
       will be replaced by the target version.
       Note that publicKeyToken is a "hash" dedicated to Oracle Corp. but might
       change in the future. We checked the token against the GAC. -->
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <publisherPolicy apply="no" />
        <assemblyIdentity name="Oracle.DataAccess"
                          culture="neutral" />
        <bindingRedirect oldVersion="" newVersion="" />
      </dependentAssembly>
    </assemblyBinding>

:: RunOraDUAL.bat
:: Oracle11g [] Instant Client with ODAC
:: Batch routine to launch OraDUAL.exe from network drive.
:: ———————————————————
title Oracle11g Instant Client – ODAC
start OraDUAL.exe
:: End

Posted in .NET, DataAccess, Oracle, Performance

Experiences with NLB, WSDL, SSL, HTTPS

Posted by kiquenet on 14 April 2015

The website is on 2 pre-production servers: preiis01 and preiis02. SSL is not configured.

I can access the URL http://preiis01/ServicioPyS/Service.svc?wsdl and invoke the service, and it works fine (everything over http).

The URL https://pruebaslba.xxxx.net/ServicioPyS/Service.svc?wsdl goes through the load balancer (an F5, I believe), which allows access from outside (public access), and SSL appears to be configured there. I don't know the details; that is the systems team's domain.

The problem is related to the WSDL:


<wsdl:service name="ActualService">
  <wsdl:port name="ActualServiceEndPoint" binding="i0:ActualServiceEndPoint">
    <soap:address location="http://preiis01/ServicioPyS/Service.svc/Service.svc" />
  </wsdl:port>
</wsdl:service>



With the load balancer's WSDL URL, the problem is that the soap:address location part is not generated correctly (it comes out as http):


<wsdl:service name="ActualService">
  <wsdl:port name="ActualServiceEndPoint" binding="i0:ActualServiceEndPoint">
    <soap:address location="http://pruebaslba.xxxx.net/ServicioPyS/Service.svc/Service.svc" />
  </wsdl:port>
</wsdl:service>



That is why the quick solution I found was to modify the (single-file) WSDL manually (the soap:address location part, changing it to httpS) and publish it at https://pruebaslba.xxxx.net/ServicioPyS/WSDL/ServiceWsdlModificado.Single.wsdl:


<wsdl:service name="ActualService">
  <wsdl:port name="ActualServiceEndPoint" binding="tns:ActualServiceEndPoint">
    <soap:address location="https://pruebaslba.xxxx.net/ServicioPyS/Service.svc/Service.svc" />
  </wsdl:port>
</wsdl:service>




There is surely a way to do this without modifying the WSDL manually, but I am not familiar with these scenarios, where the website in IIS is not SSL and yet calls from the internet through the load balancer go over SSL.

There are topics such as "SSL pass-through" and load balancers involved; I am not familiar with them.
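One configuration-only approach that may help here (a sketch, untested in this setup): WCF 4.0 added the <useRequestHeadersForMetadataAddress/> service behavior, which makes the generated WSDL build its addresses from the incoming request's Host header instead of the machine name. This is often part of the fix when a service sits behind a load balancer:

```xml
<system.serviceModel>
  <behaviors>
    <serviceBehaviors>
      <behavior>
        <serviceMetadata httpGetEnabled="true" />
        <!-- WCF 4.0+: build metadata addresses from the request's Host header -->
        <useRequestHeadersForMetadataAddress />
      </behavior>
    </serviceBehaviors>
  </behaviors>
</system.serviceModel>
```

Note that if SSL terminates at the balancer and the request reaches IIS over plain http, the scheme in the generated addresses may still come out as http, so this alone may not cover the https part of the problem.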



The initial error, until we discovered that the IIS site did not have SSL enabled:


Could not find a base address that matches scheme https for the endpoint with binding WSHttpBinding. Registered base address schemes are [http].


Could not find a base address that matches scheme https for the endpoint with binding BasicHttpBinding. Registered base address schemes are [http].

Googling around, I found different configurations and tried alternatives for serviceMetadata, bindings, baseAddresses, etc.:

<serviceMetadata httpGetEnabled="false or true" httpsGetEnabled="true" />



        <binding name="SecureTransportOnly">
          <security mode="Transport">
            <transport clientCredentialType="None" proxyCredentialType="None" realm="" />
          </security>
        </binding>

        <binding name="NoSecure">
          <security mode="None">
            <transport clientCredentialType="None" />
          </security>
        </binding>

<endpoint address="mex" binding="mexHttpBinding or mexHttpsBinding" contract="IMetadataExchange"/>

I even tried:



            <add baseAddress="https://pruebaslba.xxxx.net or https://pruebaslba.xxxx.net/ServicioPyS"/>




            <dns value="localhost" />
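Putting those fragments together, here is a sketch of one coherent <system.serviceModel> section for an HTTPS endpoint behind the balancer. The base address and binding come from the fragments above; the service and contract names follow the post's ActualService/IActualService naming, and the exact layout is an assumption, not the configuration that was actually deployed:

```xml
<system.serviceModel>
  <services>
    <service name="ActualService" behaviorConfiguration="MetadataBehavior">
      <host>
        <baseAddresses>
          <!-- illustrative address, taken from the fragments above -->
          <add baseAddress="https://pruebaslba.xxxx.net/ServicioPyS" />
        </baseAddresses>
      </host>
      <endpoint address="" binding="wsHttpBinding"
                bindingConfiguration="SecureTransportOnly"
                contract="IActualService" />
      <endpoint address="mex" binding="mexHttpsBinding"
                contract="IMetadataExchange" />
    </service>
  </services>
  <bindings>
    <wsHttpBinding>
      <binding name="SecureTransportOnly">
        <security mode="Transport">
          <transport clientCredentialType="None" />
        </security>
      </binding>
    </wsHttpBinding>
  </bindings>
  <behaviors>
    <serviceBehaviors>
      <behavior name="MetadataBehavior">
        <serviceMetadata httpsGetEnabled="true" />
      </behavior>
    </serviceBehaviors>
  </behaviors>
</system.serviceModel>
```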



HOW-TO Configure an IIS-hosted WCF service with SSL

WCF Service namespaces



public const string NAMESPACE = "https://ServicioPyS.xxxx.es";

[ServiceContract(Name = "IActualService", Namespace = Constants.NAMESPACE)]

public interface IActualService

[System.Runtime.Serialization.DataContract(Namespace = Constants.NAMESPACE)]

public class Cabecera

[System.Runtime.Serialization.DataContract(Namespace = Constants.NAMESPACE)]

public enum Respuesta

bindingNamespace and WSDL

Information on SSL/HTTPS configuration and WCF bindings:


WCF Security: authentication based on Username/Password – Part II


Other WCF Security references:






Could not find a base address that matches scheme https for the endpoint with binding WebHttpBinding. Registered base address schemes are [http]



Posted in .NET | Etiquetado: , , | Leave a Comment »

WCF Quotas and IIS Limits

Posted by kiquenet en 14 abril 2015

Exceptions about quotas and limits when using WCF to upload files or perform other operations:

System.ServiceModel.EndpointNotFoundException: There was no endpoint listening at http://localhost:8080/oms/FileService.svc that could accept the message. This is often caused by an incorrect address or SOAP action. See InnerException, if present, for more details. —> System.Net.WebException: The remote server returned an error: (404) Not Found.

System.InsufficientMemoryException: Failed to allocate a managed memory buffer of 279620368 bytes. The amount of available memory may be low. —> System.OutOfMemoryException: Exception of type ‘System.OutOfMemoryException’ was thrown.

The maximum message size quota for incoming messages (65536) has been exceeded. To increase the quota, use the MaxReceivedMessageSize property on the appropriate binding element.

Review “quotas” of Bindings

I can view System.ServiceModel.Diagnostics.MessageNotLoggedQuotaExceeded in the svclog file.

Message not logged because its size exceeds the configured quota

Request Limits (<requestLimits>) specifies the maximum length of content in a request, in bytes.
maxAllowedContentLength has type uint, so its maximum value is 4,294,967,295 bytes (just under 4 GB).

        <requestLimits maxAllowedContentLength="209715200" />

209715200 bytes is 200 MB.
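As a quick sanity check on those sizes (taking 1 MB = 1024 * 1024 bytes):

```shell
# 200 MB expressed in bytes, the value used in maxAllowedContentLength above
echo $((200 * 1024 * 1024))
# 209715200

# the uint maximum expressed in MB (integer division): just under 4 GB
echo $((4294967295 / 1024 / 1024))
# 4095
```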


In addition to maxAllowedContentLength, the maxRequestLength setting on the server side of your WCF configuration needs to be increased as well.

  <!-- Increase 'maxRequestLength' to the needed value: 100 MB (value is in kilobytes) -->
  <httpRuntime maxRequestLength="102400"/>
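Taken together, three settings commonly have to agree for large uploads to work. A sketch with illustrative 200 MB values follows; the binding name and transferMode are assumptions, not taken from the original configuration:

```xml
<!-- IIS request filtering: value in bytes -->
<system.webServer>
  <security>
    <requestFiltering>
      <requestLimits maxAllowedContentLength="209715200" />
    </requestFiltering>
  </security>
</system.webServer>

<!-- ASP.NET runtime: value in kilobytes -->
<system.web>
  <httpRuntime maxRequestLength="204800" />
</system.web>

<!-- WCF binding: value in bytes -->
<system.serviceModel>
  <bindings>
    <basicHttpBinding>
      <binding name="LargeUpload"
               maxReceivedMessageSize="209715200"
               transferMode="Streamed" />
    </basicHttpBinding>
  </bindings>
</system.serviceModel>
```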

You can check whether the default web site overrides it, using:

"%WINDIR%\System32\inetsrv\appcmd.exe" list config "Default web site" -section:requestFiltering

Note about SubStatusCode:

WebExceptionStatus – ProtocolError = 7
HTTP Substatus
404.13 Content Length Too Large
404.14 URL Too Long
404.15 Query String Too Long
404.10 Request Header Too Long

WebException has no SubStatusCode; only HttpException does.

Useful Troubleshooting Failed Requests – Tracing in IIS

using (ServerManager serverManager = new ServerManager())
{
    Configuration config = serverManager.GetWebConfiguration("Default Web Site");
    ConfigurationSection requestFilteringSection = config.GetSection("system.webServer/security/requestFiltering");
    ConfigurationElement requestLimitsElement = requestFilteringSection.GetChildElement("requestLimits");
    ConfigurationElementCollection headerLimitsCollection = requestLimitsElement.GetCollection("headerLimits");

    ConfigurationElement addElement = headerLimitsCollection.CreateElement("add");
    addElement["header"] = @"Content-type";
    addElement["sizeLimit"] = 100;
    headerLimitsCollection.Add(addElement);

    // persist the change
    serverManager.CommitChanges();
}


An EndpointNotFoundException whose InnerException is a WebException has no SubStatusCode (only HttpException does):

catch (System.ServiceModel.EndpointNotFoundException ex)
{
    var iex = ex.InnerException;
    if (iex is System.Net.WebException)
    {
        var msg = "no http status code available";
        var wex = (iex as System.Net.WebException);
        if (wex.Status == System.Net.WebExceptionStatus.ProtocolError)
        {
            var response = wex.Response as System.Net.HttpWebResponse;
            if (response != null)
                msg = "HTTP Status Code: " + (int)response.StatusCode;
        }
    }
}

Bit Calculator








Posted in .NET | Etiquetado: , , , , | Leave a Comment »

Working with Active Directory in C#

Posted by kiquenet en 15 enero 2015


Sysinternals ADExplorer

LDAP Browser (Softerra)

Classes .NET



Helpers and Snippet C#








Ports: 389 (LDAP), 636 (LDAP over SSL)


Here is an AD user attributes reference (in case you would like to get something else). Note: it does not cover all attributes.



Posted in .NET | Etiquetado: , | Leave a Comment »

Members, Contributor, Groups, Identities in TFS

Posted by kiquenet en 5 enero 2015

Get members of TFS Group

private List<Identity> ListContributors()
{
    const string projectName = "<<TFS PROJECT NAME>>";
    const string groupName = "Contributors";
    const string projectUri = "<<TFS PROJECT COLLECTION OR URL TFS SERVER>>";

    TfsTeamProjectCollection projectCollection = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(new Uri(projectUri));
    ICommonStructureService css = (ICommonStructureService)projectCollection.GetService(typeof(ICommonStructureService));
    IGroupSecurityService gss = projectCollection.GetService<IGroupSecurityService>();

    // get the tfs project
    var projectList = css.ListAllProjects();
    var project = projectList.FirstOrDefault(o => o.Name.Contains(projectName));

    // project doesn't exist
    if (project == null) return null;

    // get the tfs group
    var groupList = gss.ListApplicationGroups(project.Uri);
    var group = groupList.FirstOrDefault(o => o.DisplayName.Contains(groupName)); // you can also use AccountName, but it is empty for me

    // group doesn't exist
    if (group == null) return null;

    Identity sids = gss.ReadIdentity(SearchFactor.Sid, group.Sid, QueryMembership.Expanded);

    // there are no users
    if (sids.Members.Length == 0) return null;

    // convert to a list
    List<Identity> contributors = gss.ReadIdentities(SearchFactor.Sid, sids.Members, QueryMembership.Expanded).ToList();

    return contributors;
}

Add SID to a Readers group

# load the required dll
[void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.TeamFoundation.Client")

function get-tfs (
    [string] $serverName = $(throw 'serverName is required')
)
{
    $propertiesToAdd = (
        ('VCS', 'Microsoft.TeamFoundation.VersionControl.Client', 'Microsoft.TeamFoundation.VersionControl.Client.VersionControlServer'),
        ('WIT', 'Microsoft.TeamFoundation.WorkItemTracking.Client', 'Microsoft.TeamFoundation.WorkItemTracking.Client.WorkItemStore'),
        ('CSS', 'Microsoft.TeamFoundation', 'Microsoft.TeamFoundation.Server.ICommonStructureService'),
        ('GSS', 'Microsoft.TeamFoundation', 'Microsoft.TeamFoundation.Server.IGroupSecurityService')
    )

    # get the TFS object and add a lazy-loaded property per service
    [psobject] $tfs = [Microsoft.TeamFoundation.Client.TeamFoundationServerFactory]::GetServer($serverName)
    foreach ($entry in $propertiesToAdd) {
        $scriptBlock = '
            [System.Reflection.Assembly]::LoadWithPartialName("{0}") > $null
            $this.GetService([{1}])
        ' -f $entry[1],$entry[2]
        $tfs | add-member scriptproperty $entry[0] $ExecutionContext.InvokeCommand.NewScriptBlock($scriptBlock)
    }
    return $tfs
}

# set the TFS server url
[psobject] $tfs = get-tfs -serverName http://tfsServer:8080

$items = $tfs.vcs.GetAllTeamProjects($true)
$items | foreach-object -process {
    $proj = $_
    $readers = $tfs.GSS.ListApplicationGroups($proj.Name) | ? { $_.DisplayName -eq 'Readers' }

    $tfs.GSS.AddMemberToApplicationGroup($readers.Sid, 'TheSidToTheGroupYouWantToAdd')
}

var tpc = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(new Uri("http://vsalm:8080/tfs/FabrikamFiberCollection"));

var ims = tpc.GetService<IIdentityManagementService>();

var tfsGroupIdentity = ims.ReadIdentity(IdentitySearchFactor.AccountName,
                                        "[FabrikamFiber]\\Fabrikam Fiber Web Team",
                                        MembershipQuery.None,
                                        ReadIdentityOptions.None);

var userIdentity = ims.ReadIdentity(IdentitySearchFactor.AccountName,
                                    "<<DOMAIN\\USER>>",
                                    MembershipQuery.None,
                                    ReadIdentityOptions.None);

ims.AddMemberToApplicationGroup(tfsGroupIdentity.Descriptor, userIdentity.Descriptor);






Posted in .NET, PowerShell, Scripts, TFS | Etiquetado: , , | Leave a Comment »

Delete TestResults folder (Unit Testing Visual Studio)

Posted by kiquenet en 5 enero 2015

Cleaning up Visual Studio TestResults with PowerShell

If you write your unit tests using Visual Studio, then you know how quickly those pesky "TestResults" folders can eat up precious disk space over time.

Assuming all of your code lives under the same parent directory, replace "C:\TFS" with your parent directory.

(get-childitem C:\TFS\* -recurse) | where{$_.name -eq "TestResults"} |% {remove-item $_.fullname -recurse}


The same, with error handling when deleting folders and files, and validation of the TestResults folder contents (subfolders In and Out, *.trx files, …):

$dir = "C:\TFS\"
(get-childitem $dir -recurse) | where { $_.name -eq "TestResults" } | foreach {

    $delete = $false
    (get-childitem $_.FullName -recurse) | where { $_.name -eq "Out" } | foreach {
        $ParentS = ($_.Fullname).split("\")
        $Parent = $ParentS[@($ParentS.Length - 3)]
        if ($Parent -eq "TestResults") { $delete = $true }
    }

    if ($delete) {
        Write-Host -ForegroundColor red ("Deleting {0}" -f $_.FullName)
        Remove-item $_.fullname -recurse
    }
}

Write-Host -ForegroundColor red ("Deleted TestResults folders under {0}" -f $dir)


Posted in .NET, PowerShell, Productividad, Scripts, VisualStudio | Etiquetado: , , | Leave a Comment »

Change Owner Workspaces

Posted by kiquenet en 5 enero 2015

For example, suppose the domain is changed in a company.
We have workspaces registered under the old domain and old user, and we need to change them to the new domain and new user (NEWDOMAIN\NEWUSER).

TFS 2008 reports: "The Team Foundation Server for this workspace does not support changing the owner of a workspace."

Using the tf.exe command, located in:

C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE
C:\Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE

Workspace Command

Workspaces Command

Useful commands:

tf workspaces /collection:http://tfsServer:8080 /owner:* /computer:MYCOMPUTER /FORMAT:DETAILED

tf workspace /name:workspacename;workspaceowner /collection:tpcUrl
tf status /collection:tpcUrl /workspace:workspacename;workspaceowner
tf shelve /recursive name;owner *.* /move

tf workspace /new <name> /template:<oldname>;<olduser>

tf workspace /delete PC123;other-user /server:http://tfs:8080

A deleted workspace cannot be recovered.
Workspace 'PC123' on server 'http://tfs:8080' has 0 pending change(s).

Are you sure you want to delete the workspace? (Yes/No)

Real usage:

tf workspace /new "MyTeamProject in MYCOMPUTER with owner NEWUSER;NEWDOMAIN\NEWUSER" /template:"MyTeamProject in MYCOMPUTER with owner OLDUSER;OLDERDOMAIN\OLDERUSER" /collection:http://tfsServer:8080

tf workspace /delete /server:http://tfsServer:8080 "MyTeamProject in MYCOMPUTER with owner OLDUSER;OLDERDOMAIN\OLDERUSER"

tf workspace /server:http://tfsServer:8080 "MyTeamProject in MYCOMPUTER with owner NEWUSER;NEWDOMAIN\NEWUSER"

Useful: https://tfsworkspacescleaner.codeplex.com/






Posted in .NET, TFS | Etiquetado: , | Leave a Comment »

Manage Credentials programmatically using C#

Posted by kiquenet en 2 enero 2015

Credential Manager




        public void Set_Credentials_for_older_domain_when_migrating_to_new_domain()
        {
            var accesos = new List<string> {
                /* credential target names */
            };

            accesos.ForEach(acceso => SaveCredential(acceso));
        }

        private static Credential SaveCredential(string CredentialName)
        {
            var UserName = @"OLDERDOMAIN\user";
            var Password = "pass";

            var cm = new Credential { Target = CredentialName, Type = CredentialType.DomainPassword };
            if (cm.Exists())
            {
                cm.Load();
                Console.WriteLine("Credential " + cm.Target + ". Data: " + cm.Username + " " + cm.Password);

                //if (cm.Type == CredentialType.Generic) cm.Delete();

                return cm;
            }

            cm = new Credential
            {
                Target = CredentialName,
                Type = CredentialType.DomainPassword,
                PersistanceType = PersistanceType.Enterprise,
                Username = UserName,
                Password = Password
            };
            cm.Save();
            return cm;
        }

Useful tools: PromptForCredentials Builder and Credential Set Manager, by Kenny Kerr (Microsoft)

Windows 8 – New API Windows.Security

(new-object Windows.Security.Credentials.PasswordVault).RetrieveAll() | % { $_.RetrievePassword(); $_ }

It only displays data from the Web Credentials store, not the Windows Credentials store; the new API has no way to access the Windows Credentials.






Posted in .NET, Security, Seguridad | Etiquetado: | Leave a Comment »