How to create data security in SQL Server 2005 so that the code cannot be viewed

You can hide the implementation details of stored procedures from end users of the application by using the WITH ENCRYPTION option. ENCRYPTION indicates that SQL Server obfuscates the stored text of the CREATE PROCEDURE statement, so the definition is no longer readable through catalog views such as sys.sql_modules (or the syscomments compatibility view). In short, the WITH ENCRYPTION clause hides the text of a stored procedure from users.
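For example, a minimal sketch (the procedure and table names here are hypothetical):

CREATE PROCEDURE dbo.usp_GetCustomer
    @CustomerID INT
WITH ENCRYPTION
AS
BEGIN
    SELECT * FROM dbo.Customers WHERE CustomerID = @CustomerID;
END
GO

-- The stored definition is now hidden:
SELECT OBJECT_DEFINITION(OBJECT_ID('dbo.usp_GetCustomer'));  -- returns NULL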
It is true that the trigger, view, and procedure encryption that ships with SQL Server 2005 and 2008 is weak and easily broken. You may use SQL Shield, a tool for protecting SQL scripts from being viewed by anybody; it also protects them from decryption. It is free and easy to use.
If you want to hide an instance:
Run SQL Server Configuration Manager, right-click the instance of SQL Server you want to hide, and select Properties. In the Properties window, set Hide Instance to Yes and click OK or Apply. After the change, restart the instance of SQL Server.
Remember: the name of the instance is still shown in the Local Servers list, but not in the Network Servers list.

SQL Optimization

I. Overview
After learning the fundamentals of SQL, it's fairly easy to create queries that allow you to selectively view the contents of one or more tables. However, even though these queries may have taken very little time to create, you may end up being surprised at how long they can take to execute. In fact, it's very commonplace for the initial delight of SQL to vanish quickly when the first query takes hours to complete! This is due to two factors. First of all, SQL access through an ODBC Driver will never be quite as fast as native manipulation of the database. This makes sense as the ODBC Driver adds another layer (or middleman) to the whole process. The more layers that are added, the slower the process will be. However, it's also very common to over-estimate this impact. Most of the time, the poor performance is actually due to poorly optimized queries rather than a slow ODBC Driver. The lack of optimization in a query is the number one reason for those lengthy queries. It's very possible for an optimized query to run over 6000% faster than a non-optimized query. Therefore, as speed (and therefore usability) is always an issue, it's well worth it to spend some time examining and optimizing your queries. Because it is so important, the process of creating a query should include careful planning up front and optimization verification before the process is complete.
Because there are only a few techniques for optimizing queries, it would seem to be an easy job at first. After all, there are only a few things to check, right? Although that's correct, some of the techniques can be very time-consuming and require a good understanding of the tables, Data Dictionary, and index theory.
The techniques for optimization are outlined below.
II. Ensuring Proper Use of Indices
Effectively using a table's indices is probably the most difficult technique to employ. However, it also produces the best results and is the most important technique for this reason. Before continuing, it's worthwhile to quickly go over an index's purpose.
Index overview: An index, or a key, is a field or combination of fields used to identify a record. There are two main types of indexes - a primary index and a secondary index. A primary index uniquely identifies a record. In other words, no other record in the table will have that same index. A non-unique or secondary index does not uniquely identify a record. Two classic examples illustrate the point. A customer number field uniquely identifies the customer, as no two customers can have the same customer number. Therefore, the customer number field can be used for the primary index. A customer's Zip Code, on the other hand, is not unique. This makes sense, as many customers can be located within the same Zip Code. Therefore, the customer's Zip Code may be used as a secondary index.
Now that we've covered the two types of indices, what are they used for? In general indices are used to enable the database engine to find and sort records quickly. An index allows the engine to jump directly to where the data is located, as opposed to reading every single record to see if it matches your search criteria. As an example, most books have an index. A book's index allows you to look up a particular word, and informs you of every page that the word exists on. Therefore, using a book's index allows you to quickly find all occurrences of that particular word in the book. It's much quicker to find the word this way than it would be to read the entire book, marking every page that contains the word. It's very similar with databases. An index allows the database engine to find records quickly, as it enables the engine to jump to the locations in the file that contain a desired record. Without the use of indices, the engine would have to read every record to see if it matched the required criteria. Armed with this knowledge, it's easy to see that indices are imperative for good performance of a database, particularly with a relational database or when dealing with a very large table.
Indices can vastly improve the speed of database operations - but how and when they are implemented plays an integral part in realizing performance gains. Because their purpose is to enable the database engine to find records and sort them quickly, the likely candidates for indexing are those fields that are frequently searched, sorted, or used to join tables. However, it's still a good idea to be somewhat judicious with the choice of fields to index. Whenever a record is added or modified, the database engine has more work to do if there are many indexed fields. This is because the indices must be updated every time this sort of action takes place. If most of the table access is spent querying, this won't impact performance. However, if the table's records are frequently modified, there should be a reasonable limit to the number of fields indexed.
Indices may be comprised of a single field, or multiple fields. Which type of index to use should be determined by the types of SELECTs, ORDER BYs, and JOINs that are used most often. In other words, multiple field indices are useful when searching on multiple fields (as in a LastName+FirstName example), and single field indices are useful when searching on a single field (as in a Customer_Number example).
Once the table's indices have been defined, queries to the table can be optimized to take advantage of these indices. Generally speaking, all of your SQL queries should be optimized so that the database engine can use existing indices to find, sort, and join records and tables quickly. This means that indexed fields should always be used in the SQL statement's WHERE and ORDER BY clauses. If the WHERE clause references an index, the records can be retrieved from the file quickly, as in the book's index example. If the result set is to be sorted, then the ORDER BY clause should also reference an indexed field, as the data will be sorted much more quickly that way.
After the query has been created, the next step is to execute it and verify that the database engine was able to optimize the query by using the proper indices. With the BASIS ODBC Driver, this process is accomplished by examining the log file after a query has been performed. To illustrate this, we can perform both an indexed and non-indexed query:
Indexed query:
SELECT * from CALL_LOG where ID > '0000022000' and TIME < 9
This will select all of the records from the CALL_LOG table where the call came in before 9:00 AM and the call ID is greater than '0000022000'. Because the primary index is based on the call ID, the ODBC Driver can retrieve the records quickly. Here's the excerpt from the log file:
OPTIMIZATION STRATEGY:
    (file 1) = f:/odbc/Local_Call_Hist/data/call_log
    order_knum=-1
    [Selected] Predicate: 1 constraints
    *!: (file 1) (knum=0, kseg=1) ID (bracket head)
    Predicate: 1 constraints
    * : (file 1) (knum=-1, kseg=-1) ID (no bracketing)
Note that the log file contains several pieces of information to indicate the optimization strategy used. First of all, it says [Selected] to indicate that it was able to select an index to optimize on. Secondly, it lists two possible options to optimize off of - one for the ID field and one for the TIME field. The ID field is listed first, and the knum=0, kseg=1 indicates that it is the primary key (as it's the first key in the first keychain). The second line listed is for the TIME field. As its knum and kseg are -1, this indicates that there aren't any indices based on the TIME field. In the above example, the ODBC Driver used the primary index based on the call ID field to optimize on. This is indicated by the '!' preceding its line.
Non-Indexed query:
SELECT * from CALL_LOG where TIME < 9
This will select all of the records from the CALL_LOG table where the call came in before 9:00 AM. As there aren't any indexes based on the time field, the ODBC Driver isn't able to optimize the query. Here's the excerpt from the log file:
OPTIMIZATION STRATEGY:
    (file 1) = f:/odbc/Local_Call_Hist/data/call_log
    order_knum=-1
    Predicate: 1 constraints
    * : (file 1) (knum=-1, kseg=-1) ID (no bracketing)
Note that it doesn't have a [Selected] or a '!' to indicate a selected index.

A more complex example:

SELECT CALL_TRN.TECH, CALL_TRN.TRN_DATE from CALL_LOG, CALL_TRN where CALL_LOG.ID between '0000020000' and '0000022000' and CALL_LOG.TIME < 9 and CALL_LOG.TECH='ABC' and CALL_LOG.ID=CALL_TRN.ID order by CALL_TRN.TECH
This query is more complex than the previous examples as it does a join on the CALL_LOG and CALL_TRN tables, it references the CALL_LOG.ID field twice (for the between), and it does an order by. Here's the log file excerpt:
OPTIMIZATION STRATEGY:
    (file 1) = f:/odbc/Local_Call_Hist/data/call_log
    order_knum=-1
    [Selected] Predicate: 1 constraints
    *!: (file 1) (knum=0, kseg=1) ID (bracket head)
    Predicate: 1 constraints
    * : (file 1) (knum=0, kseg=1) ID (bracket tail)
    Predicate: 1 constraints
    * : (file 1) (knum=-1, kseg=-1) ID (no bracketing)
    Predicate: 1 constraints
    * : (file 1) (knum=0, kseg=1) ID (partial key: knum=0)

    (file 2) = f:/odbc/Local_Call_Hist/data/call_trn
    order_knum=-1
    [Selected] Predicate: 2 constraints
    : (file 1) (knum=0, kseg=1) ID (primary key)
    *!: (file 2) (knum=0, kseg=1) ID (partial key: knum=0)
The log file shows that the ODBC Driver did more work with this query in order to figure out the best field to optimize on. In fact, it checked almost every field mentioned in the query. The first two entries for the first file relate to the ID field. There are two entries for this field, a bracket head and a bracket tail. This is because the field was used twice for comparison. Additionally, note that all entries list the primary index (ID) after their knum and kseg portion to indicate the primary key. The third entry could not be used as there isn't an index associated with that field, as mentioned above. The fourth entry says that it could have chosen to do a partial key read based on the primary key. For the second file, it lists two entries. The first is the primary key for the first table. The second entry (which was picked) shows that it was able to do a partial key read off of the primary key. Note that this is quite a bit different from the first table. For the first table, it keyed directly off of ID, but it says that it can only do a partial key read off of ID for the second table. The reason for this is that the primary key for the second file is based off of multiple fields. So, since CALL_TRN's primary index was comprised of the ID field followed by two others, the ODBC Driver couldn't key directly off of the primary key. Instead, it could do a partial key read because the query referenced the first portion of the primary key.
Comparison of indexed vs. non-indexed queries: To demonstrate the importance of indices, two identical queries were performed on a table and timed. The only difference between the tests is that the first query referenced the primary index, and the second query did not reference any indices. Here is the query:
select * from CALL_TRN where ID > '0000022670'
To query on the primary index, the table was defined in the Data Dictionary to have an index for the ID field, and the physical data file used the ID field as its primary key. Before the query was run a second time, the Data Dictionary was modified so that the table's index used a different field. The physical data file was also re-written so that the ID field was not specified as a key. Here are the results:
Query on indexed field: 0.25 seconds
Query on non-indexed field: 14.71 seconds
This means that the indexed query was 5884% faster than the non-indexed query. Quite impressive, isn't it? It's worthwhile to mention that this query was structured specifically to provide a large discrepancy in the result times. In other words, performance increases aren't going to always be in the 6000% range - it can vary greatly depending on the query and the data in question. For this example, the select will only return records where the record ID is greater than '0000022670'. When looking at the data file, it turns out that only 239 records out of 48,145 match this criterion. This means that the query will only return about one half of one percent of the records - a fairly small amount. As the indexed query can jump directly to the records in the data file that match the criterion, it only has to deal with 239 records. The non-indexed query, on the other hand, has to wade through 48,145 records to see if they match the criterion.
Multiple-field indices: If the index for a table is comprised of multiple fields that have been concatenated together, then some optimization can take place if one rule is adhered to:
In order to include a field that is part of the index, you have to include every field before it.
As an example, let's say the index is comprised of three fields: Customer_Number, Last_Name, and First_Name. The ODBC driver could not perform any optimizations if the WHERE clause just included the Last_Name field. If a query has to be done on the Last_Name field, then the Customer_Number field should be included as well. Likewise, if the query used the First_Name field it should include the other two in order to ensure proper optimization.
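To make the rule concrete, here is a hedged sketch (the CUSTOMERS table is hypothetical, with its index comprised of Customer_Number + Last_Name + First_Name):

-- Can be optimized: the leading field(s) of the index are constrained
SELECT * FROM CUSTOMERS WHERE Customer_Number = '001234' AND Last_Name = 'Smith'

-- Cannot be optimized: the leading Customer_Number field is skipped
SELECT * FROM CUSTOMERS WHERE Last_Name = 'Smith'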
What about adding indices to a single-keyed MKEYED file? With a multi-keyed MKEYED file, it is possible to have several indices defined. However, with a single-keyed MKEYED file, only one index is allowed. This means all queries run against that table must reference one of the fields referenced by the primary key, according to the rule outlined above. If a query is executed that does not follow that rule, no optimization can take place. There are two possible workarounds to this problem. The first is to start using multi-keyed MKEYED files instead, as they are not limited to a single index. The second method involves creating another file. This new file will have an index defined using the first table's non-indexed field and one of its indexed fields. The idea is that the new query can do a join on the old table and the new table. The select will reference the desired field in the new table, which is part of its index. As it will also return the value of an indexed field, this information can be used to find the desired records in the original table via the index.
Structuring the query properly: Even though the query may be constructed to take advantage of indexed fields, there are two other parameters that determine how quickly the query can be performed: the order of the fields in the WHERE clause, and how simple the comparison clause is. This is best demonstrated by an example. The goal of the query is to find all of the information for each call during a particular date range where the call is of a particular priority level. Here's the first query:
select * from hist_log where PRIORITY= '4' and CALL_DATE between {d '1997-07-07'} and {d '1997-07-07'}
OPTIMIZATION STRATEGY:
    (file 1) = f:/odbc/Local_Call_Hist/data/hist_log
    order_knum=-1
    [Selected] Predicate: 1 constraints
    *!: (file 1) (knum=7, kseg=1) ID (Alternate key: knum=7)
    Predicate: 1 constraints
    * : (file 1) (knum=2, kseg=1) ID (bracket head)
    Predicate: 1 constraints
    * : (file 1) (knum=2, kseg=1) ID (bracket tail)
From the log file, we can tell that it chose the PRIORITY field to optimize on. For this particular file, both the PRIORITY field and the CALL_DATE field are secondary indices. The ODBC Driver chose the PRIORITY field for two reasons:
  1. That field was listed first in the WHERE clause
  2. The comparison for that field was much simpler than for the CALL_DATE field
The latter reason is particularly important. Because the CALL_DATE field was used for a more complex comparison (thanks to the BETWEEN), the PRIORITY field was chosen for optimization. This is made clearer by the following query:
select * from hist_log where CALL_DATE between {d '1997-07-07'} and {d '1997-07-07'} and PRIORITY = '4'
OPTIMIZATION STRATEGY:
    (file 1) = f:/odbc/Local_Call_Hist/data/hist_log
    order_knum=-1
    Predicate: 1 constraints
    * : (file 1) (knum=2, kseg=1) ID (bracket head)
    Predicate: 1 constraints
    * : (file 1) (knum=2, kseg=1) ID (bracket tail)
    [Selected] Predicate: 1 constraints
    *!: (file 1) (knum=7, kseg=1) ID (Alternate key: knum=7)
In this case, the PRIORITY field was listed last in the WHERE clause, but was still chosen over the CALL_DATE field as the comparison was much simpler.
Now, what does all of this have to do with optimization? The answer is that even though these two indices appear equal (as they are both secondary indices), we can get far better performance by optimizing off of the CALL_DATE field. The reason is simple - there are thousands of records in the data file with a priority of '4', but only a couple dozen within the specified date range. In other words, if the optimization is based on the PRIORITY field, the ODBC Driver will end up reading and comparing thousands of records to the specified date. However, if the optimization is based on the CALL_DATE field, the ODBC Driver will read and compare about 30 records to the specified priority level. To put this in perspective, the query is tried a third time:
select * from hist_log where CALL_DATE between {d '1997-07-07'} and {d '1997-07-07'} and PRIORITY like '4%'
OPTIMIZATION STRATEGY:
    (file 1) = f:/odbc/Local_Call_Hist/data/hist_log
    order_knum=-1
    [Selected] Predicate: 1 constraints
    *!: (file 1) (knum=2, kseg=1) ID (bracket head)
    Predicate: 1 constraints
    * : (file 1) (knum=2, kseg=1) ID (bracket tail)
    Predicate: 1 constraints
    * : (file 1) (knum=-1, kseg=-1) ID (no bracketing)
    Predicate: 1 constraints
    * : (file 1) (knum=7, kseg=1) ID (bracket head)
    Predicate: 1 constraints
    * : (file 1) (knum=7, kseg=1) ID (bracket tail)
This time, the query was able to 'force' the ODBC Driver into selecting the CALL_DATE field for optimization. To accomplish this, the CALL_DATE field was listed first in the WHERE clause. However, as demonstrated in the second query, that wasn't quite enough. The final step was to make the PRIORITY field look much less attractive to the ODBC Driver. This was done by making the comparison more complex - it was changed from an equals to a LIKE. Because of this, the CALL_DATE field was chosen for optimization. As mentioned above, this can drastically reduce the amount of work that the ODBC Driver has to do. Here are the timing results:
Query optimized on PRIORITY: 3.25 seconds
Query optimized on CALL_DATE: 0.05 seconds
The timings indicate that tuning the query resulted in a 6500% increase in speed - quite a dramatic improvement! In fact, the performance gain was better with this query than it was for the query that was comparing an indexed and non-indexed WHERE clause. This is important because it points out that even though a query may be referencing and optimizing off of indexed fields, it still may not be performing as well as it could be. This type of tweaking can have a profound effect on the speed of the query, but it also requires a great deal of knowledge about the table and the data that is being queried. The previous example illustrated that nicely - the performance gain was only realized because the ODBC Driver was forced to optimize on a particular field. This means that the user had to do some extra work (and thinking!) to make the ODBC Driver optimize off of a particular field. Additionally, the user knew ahead of time that the CALL_DATE field would be a far better choice, as it would substantially limit the amount of data that the ODBC Driver had to deal with. Without a good knowledge of the table's contents, this type of optimization would never have taken place.
Review: To recap, referencing indexed fields in your queries is the most effective way to ensure fast performance. If frequently run queries can't be optimized because they don't reference indexed fields, it may be worthwhile to restructure the table by adding indices to ensure faster performance. Note that this is normally a two-step process. The first step is to create the index in the Data Dictionary, and the next is to re-key the physical BBx data file so that it has the new field defined as a key. When done via PRO/5, this involves creating a new file with a new key definition in the MKEYED statement that includes an index for the desired field. Once the new file has been created, a short program must be run - one that reads all of the records from the old file and writes them into the new file. With the BASIS ODBC Driver, however, it is possible to complete this operation in a single step via the CREATE INDEX SQL statement. When this type of SQL statement is executed, the ODBC Driver will not only modify the Data Dictionary to include the new index, but it will also re-key the file.
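As a sketch of the single-step approach (the index name is a placeholder, and the exact syntax supported by the driver may vary), a statement like the following would update the Data Dictionary and re-key the file in one operation:

CREATE INDEX CALL_DATE_IDX ON HIST_LOG (CALL_DATE)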
III. Limiting the Amount of Data Returned
An often-overlooked way of improving query times is to return only the data that is actually needed. It's very common to browse the entire table, or simply do a 'SELECT * from table'. While it may sometimes be necessary to view every single field of every single record, in many cases it's done just because it's easier, regardless of whether all of the data is going to be used. Therefore, before doing a query it's wise to plan out which fields and records are actually needed, and have the query return only the necessary data. This means that the WHERE clause should be constructed to filter out as much un-needed data as possible. Additionally, if only a portion of the fields are required, stay away from the common 'SELECT * ...' type of query. Instead, have the SELECT reference only the necessary fields, resulting in much less data being processed and faster queries. This is especially important when the table has a large number of columns, or columns that are defined as large character fields. If some of these columns are not necessary for the result table, eliminating them from the query will result in much less data being transferred - which translates into faster queries. Following is an example that demonstrates this:
The goal of this query is to find out all of the call reference numbers that were handled by a particular technician. This query could be accomplished two ways:
select * from hist_log where TECH = 'ABC'

select id from hist_log where TECH = 'ABC'
The first way asks for every field for each record returned. The goal of the query was just to find out which calls were taken by the technician, though, so only the ID field needs to be returned. The second query takes advantage of that fact. Here are the result times:
Selecting all fields: 8.09 seconds
Selecting the ID field: 1.28 seconds
In this case, retrieving only the necessary data made the query run 632% faster!
IV. SQLPASSTHROUGH Mode
If at all possible, a SQLPASSTHROUGH type query should be performed. This is usually an option for the type of query that the 3rd party application (like MS Access or MS Visual Basic) will perform. With this type of query, the BASIS ODBC Driver is in charge of optimizing the query instead of the other application. As the ODBC Driver has a better knowledge of the PRO/5 file's indices and its own capabilities for optimization, it will usually do a much better job of selecting and sorting data than the other application would.
V. Data Source Configurations
The 'Fast Connect' option can help reduce query time, especially with larger Data Dictionaries. Turning on this option does several things to improve the connection time to the data source. The option that makes the biggest time difference is the 'No Shadow Data Dictionary Consistency Check', which is also available as a separate option.
The shadow DD files represent a form of the Data Dictionary that is customized for the ODBC Driver. They present the data in a format that the ODBC Driver is more readily able to digest. They store type information about the DD and information about the data types that are supported. They also leave out a lot of information from the DD that is Taos-specific. Because of this, the shadow DD files are typically much smaller than the full-blown DD files.
When a BASIS ODBC Data Source is initially connected to, the ODBC Driver will read information from the Shadow DD files to find out about the database's tables, fields, indices, etc. If neither the 'No Shadow Data Dictionary Consistency Check' nor the 'Fast Connect' option is checked, then the ODBC Driver will re-create the Shadow DD files. This means that it will read the entire DD, figure out all of the pertinent information for the Shadow DD files, then create and populate them. Needless to say, this process can take some time; the larger the Data Dictionary, the longer it takes to create the Shadow DD files. So, if the ODBC Driver is allowed to bypass the create process, it will save quite a bit of time on the initial connect. The Shadow DD files only need to be re-created if the original DD changes, so in general they shouldn't be re-created on every connection to the Data Source. Overall, it's best (from the performance standpoint) to re-create the Shadow DD files only when necessary. Therefore, these options should only be turned off when the original DD has been modified (for example, if tables, indices, or fields have been added, dropped, or modified).
VI. Data Server versus Mapped Drives
If the data that is being queried resides on a remote machine, it's usually possible to access it either via the BASIS Data Server or via a mapped drive. When possible, the BASIS Data Server should be used instead of a mapped drive, as queries will be several times faster via the Data Server. This is because the Data Server can reduce the amount of data that is sent over the network. When reading information from a share (a mapped drive), the protocol demands that locking information be sent back and forth so that the ODBC Driver can read the information. However, when the Data Server is used, it handles the locking calls. This means that all of that handshaking is done locally (to that remote machine), and the Data Server will send far less information over the network. As the network link is relatively slow, reducing the amount of information passed through this channel can greatly improve response times.
VII. Machine Speed
ODBC queries can be quite demanding on the workstation's resources. In particular, the machine's CPU speed and hard disk speed (for local queries and ORDER BYs) can have a large impact on the length of time it takes for a query to complete. As a test, a report consisting of ten queries was run from two machines. The first machine was a 486 DX2-66 and it took 117 seconds to complete the report. The second machine was a Pentium 133 and it took 56 seconds to complete. For this test, both machines were on the same network, so that didn't play a part in the speed increase, which was more than 200%. The deciding factor for this test was the CPU and hard disk speed. This is due to the fact that the queries polled large amounts of data, and ORDER BYs and GROUP BYs were used.

Difference between WCF and Web services

What WCF Can Replace?
Windows Communication Foundation (WCF) has an ASP.NET compatibility mode option to enable WCF applications to be programmed and configured like ASP.NET Web services, and to mimic their behavior.

A major difference is that Web services use XmlSerializer, but WCF uses DataContractSerializer, which performs better than XmlSerializer.
Key issues with using XmlSerializer to serialize .NET types to XML:

* Only public fields or properties of .NET types can be translated into XML.
* Only classes that implement the IEnumerable interface can be serialized as collections.
* Classes that implement the IDictionary interface, such as Hashtable, cannot be serialized.

Important differences between DataContractSerializer and XmlSerializer:

* A practical benefit of the design of the DataContractSerializer is better performance than XmlSerializer.
* XmlSerializer does not indicate which fields or properties of a type are serialized into XML, whereas DataContractSerializer explicitly shows which fields or properties are serialized (see the sketch below).
* The DataContractSerializer can translate a Hashtable into XML.
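As a minimal sketch of DataContractSerializer's explicit opt-in model (the class and its members are hypothetical):

using System;
using System.Runtime.Serialization;

// Only members marked [DataMember] are serialized, and they need not be public.
[DataContract]
public class Customer
{
    [DataMember]
    public string Name { get; set; }

    [DataMember]
    private int id;                      // private members can be serialized

    public string Notes { get; set; }    // not serialized: no [DataMember]

    public Customer(int id, string name) { this.id = id; Name = name; }
}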



One more thing: we can host WCF services in a number of hosting applications, unlike ASP.NET Web services, which support only IIS hosting.
............................................................................................................................................................
WCF "web services" are part of a much broader spectrum of remote
communication enabled through WCF. You will get a much higher degree of
flexibility and portability doing things in WCF than through traditional
ASMX because WCF is designed, from the ground up, to summarize all of the
different distributed programming infrastructures offered by MS. An endpoint
in WCF can be communicated with just as easily over SOAP/XML as it can over
TCP/binary and to change this medium is simply a configuration file mod. In
theory this reduces the amount of new code needed when porting or changing
business needs, targets, etc.
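As a hedged sketch of that configuration-driven flexibility (the service name, addresses, and contract are hypothetical), the same contract can be exposed over HTTP/SOAP and TCP/binary side by side:

using System.ServiceModel;

[ServiceContract]
public interface IGreeter
{
    [OperationContract]
    string Greet(string name);
}

<!-- In the host's config file: two endpoints, one contract; switching
     transports is purely a configuration change. -->
<system.serviceModel>
  <services>
    <service name="Samples.GreeterService">
      <endpoint address="http://localhost:8000/greeter"
                binding="basicHttpBinding"
                contract="Samples.IGreeter" />
      <endpoint address="net.tcp://localhost:8001/greeter"
                binding="netTcpBinding"
                contract="Samples.IGreeter" />
    </service>
  </services>
</system.serviceModel>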

ASMX is older than WCF, and anything ASMX can do, WCF can do too (and more). Basically you can see WCF as trying to logically group together all the different ways of getting two apps to communicate in the world of MS; ASMX was just one of these many ways and so is now grouped under the WCF umbrella of capabilities.

Advantages of the WCF

Advantages of the WCF service over other distributed communication technologies like .NET Remoting, MSMQ, COM+, and Web services



WCF is interoperable with other platforms, compared to .NET Remoting, where both the client and the service have to be .NET. WCF services also provide better reliability and security compared to ASMX Web services.



• WCF does not replace MSMQ, COM+ or Web Services, so it doesn't make sense to compare them in the manner you've laid out. WCF provides a declarative infrastructure for all forms of communication to and from the Windows platform. It is inclusive of nearly all of the technologies that you asked about rather than being in competition with them (Remoting being a noted exception). Using WCF, communications happen at designated service endpoints, and an endpoint can implement different protocols (such as Web Service (presumably you meant asmx style here) and WS-* SOAP style services), encodings (such as Text, Binary & MTOM), and transports (such as MSMQ, Http, etc.). Services can be accessible through multiple endpoints, thus providing the capability for a single service to be simultaneously accessed via multiple protocols and transports. With .NET 3.5, it becomes even easier to provide RESTful endpoints (JSON, POX, RSS, and ATOM). Building services that can be consumed by a web client as JSON while simultaneously being accessed by a business partner through the internet using the SOAP protocol (with its full complement of WS-* implementations such as WS-Security, WS-ReliableMessaging, etc.) is very powerful leverage.



.NET Remoting is an obsolete technology (no improvements since .NET 1.1, and none are planned).

• The advantages of WCF are clearly evident once we see the power it delivers to products, not only during development but also in implementation and maintenance.



Let me explain as to what I am trying to convey:

1. WCF clearly demarcates development and implementation. All the development part lies in the code, while implementation choices like the binding you choose or the security model you embrace can be handled in the config file. COM+ or related technologies will not give you this advantage. This is what WCF means when it says it reduces plumbing time: the developer concentrates on what he wants to achieve, and WCF takes care of delivering it to the outer world in the most reliable, secure, and high-performance fashion. WCF also scores over Remoting by giving out-of-the-box solutions to publish code as Web services; these out-of-the-box solutions are simpler to implement and have less overhead.

2. WCF opens up communication with applications on various platforms. The fast development power of .NET combined with the fast delivery power of WCF makes the time to market very short, and on top of that we get interoperability. In other technologies you have to do some work to achieve things like tracing, but in WCF all we have to do is enable tracing by changing the config file, and WCF starts providing traces for you.

3. Furthermore, this interoperability doesn't mean that a WCF service runs on Linux; it means a WCF service can talk to any app written based on a certain set of protocols specified by WCF. In the Web world this means it is available on every major browser. With other technologies, achieving the same requires lots of tweaking, but it is available out of the box when you embrace WCF.

4. Hosting a WCF service is very simple on IIS 7 or WAS in Vista or WIN 2K8. On WIN 2K3 and WIN XP SP2 it can be hosted on IIS 6 for the HTTP mode, and a Windows service or even self-hosting will do. This, too, takes minimal effort.

5. WCF has an integrated logging mechanism that helps you log traces, which becomes a boon during maintenance; in other technologies the developer has to build this himself. The security mechanism in WCF is fairly simple and out of the box from the implementor's perspective, but it is very robust and highly secure. The best part is that for the highly used and recommended bindings, WCF provides default security which can be trusted to the core; the message security on wsHttpBinding is an example along these lines.

Overall, to conclude, I would say that if you are with MS technologies and plan to stick with them in the long run, then WCF is something you should not fail to embrace. The performance and other data points that I convey here are my personal experience from integrating our product (which is 11 years old and has almost all flavours of MS technologies) with WCF.



Moreover, WCF is a framework for developing service-oriented applications on the Microsoft platform, mixing both the message and RPC styles of programming. Through it we can achieve platform optimization through binary communication, as well as open, standards-based communication.

Can we have two web.config files for a web application?

Introduction

I would like to share what I have understood about working with more than one Web.config file from my latest ASP.NET application. We planned to have different Web.config files for sub-folders in the application root folder. It helps us to have small and easily maintainable configuration files.

Hierarchy of Web.config Files

System wide configuration settings are defined in the Machine.config for the .NET Framework. The Machine.config file is located in the C:\WINDOWS\Microsoft.NET\Framework\v1.1.4322\CONFIG folder. The settings defined in this file are applicable to all ASP.NET applications in that system.
We can override these default settings by including a Web.config file in the application's root folder.
By including Web.config files in sub-folders, we can override the settings defined in the Web.config file in the application's root folder.
The following are sample section declarations from a Machine.config file:
<section name="processModel" 
  type="System.Web.Configuration.ProcessModelConfigurationHandler, 
        System.Web, Version=1.0.5000.0, Culture=neutral, 
        PublicKeyToken=b03f5f7f11d50a3a" 
  allowDefinition="MachineOnly"/>
 
<section name="sessionState" 
  type="System.Web.SessionState.SessionStateSectionHandler, 
        System.Web, Version=1.0.5000.0, Culture=neutral, 
        PublicKeyToken=b03f5f7f11d50a3a" 
  allowDefinition="MachineToApplication"/>
 
<section name="appSettings" 
  type="System.Configuration.NameValueFileSectionHandler, System,
        Version=1.0.5000.0, Culture=neutral, 
        PublicKeyToken=b77a5c561934e089"/>
There is an attribute allowDefinition specified in the first two section declarations with the values: MachineOnly and MachineToApplication.

What does it mean?

If allowDefinition="MachineOnly", then we can not override this section either in application level or in folder level. The only section declared in the Machine.config file with this settings is processModel.
If allowDefinition="MachineToApplication", then we can override these sections by the root directory Web.config. Sections with this setting in Machine.config are authentication, machineKey, sessionState, trust, and securityPolicy.
If allowDefinition attribute is omitted in a section declaration of the Machine.config file, we can override that section at any level.
We can override the section appSettings at any level and can access it by using ConfigurationSettings.AppSettings easily.

What is there in the sample project?

The sample source code is a simple ASP.NET web application with three Web Forms and three Web.config files. The root folder has a subfolder SubFolder_L1, which in turn contains SubFolder_L2; each folder has one Web Form and one Web.config file.
Web.config files have different and overridden keys. Web.config file in the root folder has the following appSettings declarations:
<appSettings>
    <add key="root" value="Root folder's configuration file."/>
    <add key="color" value="Blue"/>
</appSettings>
Web.config file in Subfolder_L1 has the following appSettings declarations:
<appSettings>
    <add key="subfolder_l1" value="Subfolder_L1\web.config file."/>
    <add key="color" value="Green"/>
</appSettings>
The color setting is overridden by the subfolder's configuration file. We can read the root key from Subfolder_L1 or Subfolder_L2 with the following code:
lblConfig.Text = ConfigurationSettings.AppSettings["root"];
But we cannot read configuration settings defined in Subfolder_L1's Web.config file from the root folder.

......................................................................................................................................................................


Based on my understanding, you have two web.config files. One is in the root directory and another is in a sub-directory. If I have misunderstood, please let me know.
For this scenario, the root cause is that some elements in web.config are not allowed at the sub-directory level. All of those elements have the attribute allowDefinition="MachineToApplication", which means we can only define those elements at the machine level and the root application level.
To solve this issue, please remove the elements (in the sub-directory web.config) which can result in this problem, for example membership, roleManager, sessionState, authentication, and so on. They are automatically inherited from the root web.config.
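For example, a sub-directory Web.config can safely keep folder-level settings such as appSettings and authorization (a hedged sketch; the key and the rule are placeholders):

<!-- SubFolder_L1\Web.config -->
<configuration>
  <appSettings>
    <add key="color" value="Green"/>
  </appSettings>
  <system.web>
    <!-- authorization may be overridden per folder; sessionState,
         authentication, etc. (MachineToApplication) may not -->
    <authorization>
      <deny users="?"/>
    </authorization>
  </system.web>
</configuration>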

How do you read configuration data from web.config or app.config files in .NET?

You can extend ASP.NET configuration settings with XML configuration elements of your own. To do this, you create a custom configuration section handler. The handler must be a .NET Framework class that inherits from the System.Configuration.ConfigurationSection class. The section handler interprets and processes the settings that are defined in XML configuration elements in a specific section of a Web.config file. You can read and write these settings through the handler's properties.

To create a custom configuration section handler

  1. Create a public class that inherits from the System.Configuration.ConfigurationSection class.
  2. Add code to define the section's attributes and elements.
    The following example shows how to create a handler for a custom configuration section named "PageAppearance." The section has a RemoteOnly attribute and Font and Color elements. The code shows how to use string, integer, and Boolean attributes. It also shows how to use string and integer validators for those attributes. (The Boolean attribute is automatically validated.) The validators check the format of the configuration markup at run time and throw exceptions if the values provided for the custom attributes do not meet the specified criteria.
    using System;
    using System.Collections;
    using System.Text;
    using System.Configuration;
    using System.Xml;
    
    namespace Samples.AspNet
    {
        public class PageAppearanceSection : ConfigurationSection
        {
            // Create a "remoteOnly" attribute.
            [ConfigurationProperty("remoteOnly", DefaultValue = "false", IsRequired = false)]
            public Boolean RemoteOnly
            {
                get
                { 
                    return (Boolean)this["remoteOnly"]; 
                }
                set
                { 
                    this["remoteOnly"] = value; 
                }
            }
    
            // Create a "font" element.
            [ConfigurationProperty("font")]
            public FontElement Font
            {
                get
                {
                    return (FontElement)this["font"];
                }
                set
                {
                    this["font"] = value;
                }
            }
    
            // Create a "color element."
            [ConfigurationProperty("color")]
            public ColorElement Color
            {
                get
                {
                    return (ColorElement)this["color"];
                }
                set
                {
                    this["color"] = value;
                }
            }
        }
    
        // Define the "font" element
        // with "name" and "size" attributes.
        public class FontElement : ConfigurationElement
        {
            [ConfigurationProperty("name", DefaultValue="Arial", IsRequired = true)]
            [StringValidator(InvalidCharacters = "~!@#$%^&*()[]{}/;'\"|\\", MinLength = 1, MaxLength = 60)]
            public String Name
            {
                get
                {
                    return (String)this["name"];
                }
                set
                {
                    this["name"] = value;
                }
            }
    
            [ConfigurationProperty("size", DefaultValue = "12", IsRequired = false)]
            [IntegerValidator(ExcludeRange = false, MaxValue = 24, MinValue = 6)]
            public int Size
            {
                get
                {
                    return (int)this["size"];
                }
                set
                {
                    this["size"] = value;
                }
            }
        }
    
        // Define the "color" element 
        // with "background" and "foreground" attributes.
        public class ColorElement : ConfigurationElement
        {
            [ConfigurationProperty("background", DefaultValue = "FFFFFF", IsRequired = true)]
            [StringValidator(InvalidCharacters = "~!@#$%^&*()[]{}/;'\"|\\GHIJKLMNOPQRSTUVWXYZ", MinLength = 6, MaxLength = 6)]
            public String Background
            {
                get
                {
                    return (String)this["background"];
                }
                set
                {
                    this["background"] = value;
                }
            }
    
            [ConfigurationProperty("foreground", DefaultValue = "000000", IsRequired = true)]
            [StringValidator(InvalidCharacters = "~!@#$%^&*()[]{}/;'\"|\\GHIJKLMNOPQRSTUVWXYZ", MinLength = 6, MaxLength = 6)]
            public String Foreground
            {
                get
                {
                    return (String)this["foreground"];
                }
                set
                {
                    this["foreground"] = value;
                }
            }
    
        }
    
    }
    
    
    
    This example uses the declarative model. A configuration section handler can also be implemented programmatically. For an example, see Classes Used to Create Custom Section Handlers and the System.Configuration.ConfigurationSection class overview.

To add a custom section handler to an ASP.NET configuration file

  1. In the Web.config file, add a sectionGroup element and a section element inside the configSections element, as shown in the following example. The declaration associates the custom section handler with the section name.
    Note: Nesting a section element in a sectionGroup is optional, but we recommend doing this to help organize configuration data.
    <configuration>
    <!-- Configuration section-handler declaration area. -->
      <configSections>
        <sectionGroup name="pageAppearanceGroup">
          <section 
            name="pageAppearance" 
            type="Samples.AspNet.PageAppearanceSection" 
            allowLocation="true" 
            allowDefinition="Everywhere"
          />
        </sectionGroup>
          <!-- Other <section> and <sectionGroup> elements. -->
      </configSections>
    
      <!-- Configuration section settings area. -->
    
    </configuration>
    
    You can add the section-handler declaration in a different configuration file than the one where you add the custom configuration elements, providing that the configuration file where the section handler is declared is higher in the configuration file hierarchy. If you add the section handler declaration to a configuration file that is outside of your application, you must do the following:
    • Include the assembly that defines the section in the same directory as the Web.config file.
    • Ensure that the type attribute of the section element matches the manifest of the assembly (ensure that you specify both the correct namespace and type name).
    If either of these conditions is not fulfilled, a configuration error will be thrown. For more information, see ASP.NET Configuration File Hierarchy and Inheritance.
  2. Add custom configuration elements in the configuration section settings area of the Web.config file, as shown in the following example:
    <configuration>
    
    <!-- Configuration section-handler declaration area. -->
    
      <!-- Configuration section settings area. -->
      <pageAppearanceGroup>
        <pageAppearance remoteOnly="true">
          <font name="TimesNewRoman" size="18"/>
          <color background="000000" foreground="FFFFFF"/>
        </pageAppearance>
      </pageAppearanceGroup>
    
      <!-- Other configuration settings, such as system.web -->
    
    </configuration>
    

To programmatically access custom configuration data

  • Get an instance of the custom configuration object and use the ConfigurationManager.GetSection method (or, in ASP.NET, the WebConfigurationManager.GetSection method) to populate it.
    The following example shows an ASP.NET Web page that works with the previous examples to enumerate the attributes and child elements of the custom configuration section.
    <%@ Page Language="C#" %>
    
    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
    
    <script runat="server">
      protected void Page_Load(object sender, EventArgs e)
      {
        Samples.AspNet.PageAppearanceSection config =
            (Samples.AspNet.PageAppearanceSection)System.Configuration.ConfigurationManager.GetSection(
            "pageAppearanceGroup/pageAppearance");
    
        Response.Write("<h2>Settings in the PageAppearance Section:</h2>");
        Response.Write(string.Format("RemoteOnly: {0}<br>", 
            config.RemoteOnly));
        Response.Write(string.Format("Font name and size: {0} {1}<br>",
            config.Font.Name, config.Font.Size));
        Response.Write(
            string.Format("Background and foreground color: {0} {1}<br>",
            config.Color.Background, config.Color.Foreground));
      }
    </script>
    
    <html xmlns="http://www.w3.org/1999/xhtml">
    <head runat="server">
      <title>Custom Configuration Section Example</title>
    </head>
    <body>
      <form id="form1" runat="server">
      <div>
        <h1>Settings in a Custom Configuration Section</h1>
      </div>
      </form>
    </body>
    </html>
    
    
    

How do you encrypt the connection string?


This article gives an idea of how we can achieve security in the web.config file. It is a basic practice to store our database connection string in the web.config file; this avoids hard-coding, and you can always change it as and when required. The problem is that anybody who has access to the solution can see the user name and password of the database, and they can be changed. To avoid this kind of problem, Microsoft provides a tool called aspnet_setreg.exe; this will help us encrypt the connection string or any other information you want secured.
Aspnet_setreg.exe is a tool which will create an encrypted registry entry for your values; you can encrypt any of the web.config key values using this tool.
How to do it? For example, if your database connection string is DSN=test;uid=test;pwd=test; then your web.config entry would be:
[source:c#]
<configuration>
<appSettings>
<add key="sqlConnectionString" value="DSN=test;uid=test;pwd=test;" />
</appSettings>
</configuration>
[/source]
But in the above entry you can always see the user name and password of the database.
Now we will see how to use aspnet_setreg.exe.
First of all you need to create a registry entry using aspnet_setreg.exe
Example to create:
aspnet_setreg.exe -k:SOFTWARE\Your_Service\SessionState -c:sqlConnectionString="DSN=test;uid=test;pwd=test;"
After executing the above command, you can see a registry entry under HKEY_LOCAL_MACHINE\SOFTWARE\Your_Service\SessionState.
Now the entry in web.config should be like the following:
<appSettings>
<add key="sqlConnectionString"
value="registry:HKLM\SOFTWARE\Your_Service\SessionState\ASPNET_SETREG,sqlConnectionString" />
</appSettings>

Getting back the Connection String
Now we will see how to get back the connection string in our project. We know that our connection string is encrypted and stored in the registry, so the data retrieved from the registry must be decrypted before use. For that we need a free third-party DLL called NCrypto; you can download its source from http://sourceforge.net/projects/ncrypto/. This DLL provides the function with which you can decrypt your data.
Sample code
[source:c#]
using System;
using System.Configuration;
using System.Text;
using Microsoft.Win32;
using NCrypto.Security.Cryptography;

private string DecryptString()
{
    // The appSettings value looks like:
    // registry:HKLM\SOFTWARE\Your_Service\SessionState\ASPNET_SETREG,sqlConnectionString
    string pConnectionString =
        ConfigurationSettings.AppSettings["sqlConnectionString"];
    string sSqlConnectionString = null;
    try
    {
        char[] cSeparators = { ':', ',' };
        // Split off the "registry" prefix using the ':' character
        string[] regKeyPath = pConnectionString.Split(cSeparators[0]);
        // Split the registry key path from the value name at the ','
        regKeyPath = regKeyPath[1].Split(cSeparators[1]);
        RegistryKey regkeyHive = Registry.LocalMachine;
        RegistryKey regKey =
            regkeyHive.OpenSubKey(regKeyPath[0].Replace("HKLM\\", ""));
        // Get the encrypted value
        byte[] registryBytes = (byte[])regKey.GetValue(regKeyPath[1]);
        // Decrypt the value into the connection string
        sSqlConnectionString =
            Encoding.Unicode.GetString(ProtectedData.Unprotect(registryBytes));
        // Strip the "sqlConnectionString=" prefix that aspnet_setreg stores
        sSqlConnectionString =
            sSqlConnectionString.Replace(regKeyPath[1] + "=", "");
    }
    catch (Exception)
    {
        // On failure, null is returned
    }
    return sSqlConnectionString;
}
[/source]
This function gets the connection string from the registry, decrypts it, and returns it ready for use.
Important
After creating the registry entry using aspnet_setreg.exe, you need to give the Network Service user read permission on the registry entry \Registry\Machine\Software\Your_Service\SessionState\ASPNET_SETREG: open regedit, navigate to this entry, and give read permission to the Network Service user.
Conclusion
This can be a best practice for implementing security in the web.config file, and it will help keep our information secured.

Where do you store the connection string?

<connectionStrings>
  <add name="MyConnection"
       connectionString="Data Source=servername;Initial Catalog=databasename;Integrated Security=True"
       providerName="System.Data.SqlClient" />
</connectionStrings>
This code is written in the web.config file (MyConnection, servername, and databasename are placeholders).
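A minimal sketch of reading it back in code (assuming the placeholder name above; add a reference to System.Configuration.dll):

using System.Configuration;

// Reads the named connection string from web.config
string connStr =
    ConfigurationManager.ConnectionStrings["MyConnection"].ConnectionString;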

Importance of config files in .NET

In this .NET interview question the interviewer expects two things: first, the importance of configuration, and second, the scenarios in which the above files are applicable. So let's answer the question in two parts.

The importance of config files
App.config, web.config and machine.config are files which store configuration data in XML format. We need configuration data at the application level or at the machine/server level.

Scenarios in which the above config files are used
Machine.config stores configuration information at the system level. It can contain configuration information like the ASP.NET application timeout, requestLimit, memoryLimit, ClientConnectedCheck, etc.
Generally we have two kinds of applications: web applications and Windows applications. Web.config stores configuration data for web applications, and app.config stores configuration information for Windows applications.
Application-level configuration data can include things like connection strings, security settings, etc.

Some connected questions which the interviewer can ask
Where do you store the connection string?
How do you encrypt the connection string?
How do you read configuration data from web.config or app.config files in .NET?
Can we have two web.config files for a web application?

Virtual Directory Permissions, Briefly

IIS Virtual Directories and File Permissions for Security and Patch Manager
Virtual Directory Properties and Permissions

Virtual Directory Name: incomingdata
Directory on disk: Program Files\LANDesk\ManagementSuite\incomingdata\
Virtual Directory Properties: Read, Directory Browsing, Log Visits, Index this resource
Directory Security: Anonymous (IUSR_Servername)
Permissions:
  Administrators - Full Control
  ASP.NET Machine Account - Full Control
  Internet Guest Account - Read & Execute, List Folder Contents, Read
  LANDesk Management Suite - Full Control
  Launch IIS Process Account - Read & Execute, List Folder Contents, Read
  Network Service - Full Control
  System - Full Control

Virtual Directory Name: LdLogon
Directory on disk: Program Files\LANDesk\ManagementSuite\ldlogon\
Virtual Directory Properties: Read, Directory Browsing, Log Visits, Index this resource
Directory Security: Anonymous (IUSR_Servername)
Permissions (these permissions also apply to the agentbehaviors subdirectory under LDLOGON):
  Administrators - Full Control
  ASP.NET Machine Account - Full Control
  Everyone - Read & Execute, List Folder Contents, Read
  Internet Guest Account - Read & Execute, List Folder Contents, Read
  LANDesk Management Suite - Full Control
  Network Service - Full Control
  System - Full Control

Virtual Directory Name: WSVulnerabilityCore
Directory on disk: Program Files\LANDesk\ManagementSuite\WSVulnerabilityCore\
Virtual Directory Properties: Read, Directory Browsing, Log Visits, Index this resource
Directory Security: Anonymous (IUSR_Servername)
Permissions:
  Administrators - Full Control
  ASP.NET Machine Account - Read & Execute, List Folder Contents, Read
  LANDesk Management Suite - Full Control
  Network Service - Read & Execute, List Folder Contents, Read
  System - Full Control

Directory Permissions

Directory Name: Program Files\LANDesk\ManagementSuite\LDLogon\VulscanResults
Permissions:
  Administrators - Full Control
  ASP.NET Machine Account - Full Control
  Everyone - Read & Execute, List Folder Contents, Read
  Internet Guest Account - Full Control
  LANDesk Management Suite - Full Control
  Network Service - Full Control
  System - Full Control

Directory Name: Program Files\LANDesk\ManagementSuite\LDLogon\VulnerabilityData
Permissions:
  Administrators - Full Control
  ASP.NET Machine Account - Full Control
  Everyone - Read & Execute, List Folder Contents, Read
  Internet Guest Account - Read & Execute, List Folder Contents, Read
  LANDesk Management Suite - Full Control
  Network Service - Full Control
  System - Full Control

Virtual Directory Permission

This step-by-step article describes how to grant Web server permissions for Web content using Internet Information Services (IIS) 5.0.

You can grant Web server permissions for specific Web sites, folders, and files on your server. Unlike the NTFS file system permissions that apply only to either a specific user or a group of users that have a valid Windows account, Web server permissions apply to all users that access your Web site regardless of their specific access rights.

Web access permissions use the IUSR_computername account by default. When you install IIS, the IUSR_computername account is created and used as the default anonymous user account. When you enable anonymous access, IIS uses the IUSR_computername account to log on all users who access your site.

The IUSR_computername account is granted NTFS permissions for the folders that make up the Web sites on your server. However, you can change the permissions for any folder or file in your site. For example, you can use Web server permissions to control whether visitors to your Web site are allowed to view a particular Web page, upload information, or run scripts.

When you configure both Web server permissions and Windows NTFS permissions, you can control how users access your Web content on multiple levels, from the entire Web site to individual files.

You can assign strong NTFS permissions for your resources; the NTFS file system is more secure than the FAT or FAT32 file system. You can also assign the most restrictive Web permissions possible. For example, if the Web site is used only for viewing information, assign only Read permissions. If a directory or site contains applications, assign Scripts only permissions instead of Scripts and Executables permissions. Do not assign Write and Script source access permissions together, or Scripts and Executables permissions; use these combinations with extreme caution, because they could allow a user to upload potentially harmful executable files to the server and run them.
How to grant Web server permissions for Web content
Start Internet Services Manager. Alternatively, start the IIS snap-in.
Click to expand server name, where server name is the name of the server.
Right-click either the Web site, the virtual directory, the folder, or the file for which you want to grant permissions, and then click Properties.
Click one of the following tabs that is appropriate to your situation:
Home Directory
Virtual Directory
Directory
File

Either click to select or click to clear any of the following check boxes (if present) that are appropriate for the level of Web permissions that you want to grant:
Script Source Access: Grant this permission to allow users to access source code. Script Source Access includes source code for scripts, such as scripts in Active Server Pages (ASP) programs. Note that this permission is available only if you grant either the Read or the Write permission.

NOTE: When you click Script Source Access, users may be able to view sensitive information, such as a user name and a password, from scripts in an ASP program. They are also able to change source code that runs on your server, which can seriously affect the security and the performance of your server. It is recommended that you handle access to this type of information and to these functions using individual Windows accounts and higher-level authentication, such as integrated Windows authentication.
Read: Grant this permission to allow users to either view or download files or folders and their associated properties. Read permissions are selected by default.
Write: Grant this permission to allow users either to upload files and their associated properties to the enabled folder on your server or to change the content or properties of a write-enabled file.
Directory browsing: Grant this permission to allow users to view a hypertext listing of the files and the subfolders in the virtual directory. Note that virtual directories are not displayed in folder listings; users must know a virtual directory's alias.

NOTE: An "Access Forbidden" error message is displayed by your Web server in a user's Web browser if the user attempts to access either a file or folder on your server and both of the following conditions are true:
Directory browsing is disabled.

-and-
The user does not specify a file name such as Filename.htm in the Address box.
Log visits: Grant this permission to log visits to this folder in a log file. A log entry is recorded only if logging is enabled for the Web site.
Index this resource: Grant this permission to allow Microsoft Indexing Service to include this folder in a full-text index of the Web site. When you grant this permission, users can perform queries on this resource.
In the Execute Permissions box, choose a setting to determine how you want scripts to be run on the site. The following settings are available:
None: Click this setting if you do not want users to run scripts or executable programs on the server. When you use this setting, users can gain access only to static files such as Hypertext Markup Language (HTML) and image files.
Scripts only: Click this setting to run scripts such as ASP programs on the server.
Scripts and Executables: Click this setting to run both scripts such as ASP programs and executable programs on the server.
Click OK, and then quit Internet Services Manager or quit the IIS snap-in.
NOTES:
When you try to change security properties for a Web site or virtual directory, IIS checks the existing settings on the child nodes (virtual directories and files) that are contained within that Web site or virtual directory. If the permissions that are set at the lower levels are different, IIS displays an Inheritance Overrides dialog box. To specify which child nodes should inherit the permissions that you set at the higher level, click the node or nodes in the Child Nodes list, and then click OK. The child node inherits the new permissions settings.
If Web permissions and NTFS permissions differ for either a folder or a file, the more restrictive of the two settings is used. For example, if you grant Write permissions to a folder in IIS, and grant Read permissions to a particular user group in NTFS, those users cannot write files to the folder because Read permissions are more restrictive.
If you disable Web server permissions (for example, Read permissions) on a resource, all users are restricted from viewing that resource, regardless of the NTFS permissions that are applied to those users' accounts. If you enable Web server permissions (for example, Read permissions) on a resource, all users can view that resource unless NTFS permissions that restrict access to it are also applied.

Differences Between MySQL and SQL Server

When considering a utility for data management, the two most popular choices are MySQL and SQL Server. Both are efficient at keeping your data organized and readily available through a user interface, but they differ drastically in many areas. In the following discussion we will review the major differences in detail to help answer the most common question asked in data management discussions: "Which is better, MySQL or SQL Server?"
Some key questions to ask yourself in determining which data management tool is best suited for your needs are:
  • How flexible do you need your data to be?
  • How secure must your data be?
  • Do you need support in utilizing your data management tools?

Standard Language Support

An important aspect of database development is how you access your data using different standards-based protocols. Both MySQL and SQL Server do a good job of supporting all major protocols for accessing their respective databases. A standard protocol is the "programming language" used for a program to communicate with a SQL database. The most common such language is T-SQL, or Transact SQL. Transact SQL is a series of statements that a program can use to access data and create new tables in a SQL database. The statements can INSERT new records, DELETE old records, and UPDATE existing records, along with a myriad of other functions.
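For example (a minimal sketch; the table and column names are hypothetical), a program could send Transact SQL statements like these:

    -- Add a new customer record
    INSERT INTO Customers (CustomerNumber, Name, ZipCode)
    VALUES (1001, 'Acme Corp', '30339');

    -- Change an existing record
    UPDATE Customers
    SET ZipCode = '30340'
    WHERE CustomerNumber = 1001;

    -- Remove an old record
    DELETE FROM Customers
    WHERE CustomerNumber = 1001;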
The international standard for this programming language is referred to as ANSI SQL. ANSI stands for American National Standards Institute, a collective governing body that determines what the standard language will be. For example, ANSI decides such things as that the word INSERT, as opposed to the word ADD, will be used in the language to insert a new record into a database.
While both databases do a good job of supporting all major protocols, one of the most frequent complaints about MySQL is that it does not completely follow the ANSI SQL standard. This would not necessarily be an issue if your need for data storage never outgrows the capability of MySQL. However, if your data is constantly growing, there may be an eventual need to upgrade from MySQL to SQL Server to handle the vast amounts of data. In a situation like this, any application that has been written to manipulate a MySQL database will have to be revisited to be sure that the program is able to "communicate" with the new data store.
To give a better idea of the differences between MySQL and SQL Server: MySQL is geared more toward selecting data so it can be displayed, updated, and saved again. MySQL is weaker in the areas of inserting and deleting data. MySQL is an excellent choice, for other reasons that we will discuss later on, for data storage and referencing data. MySQL is not the best choice, however, for creating, inserting, and deleting data from tables on the fly.
For an Internet application, MySQL would be a good choice for tracking clients and creating dynamically populated pages with information from the database. For an application of moderate to large scale that is used for commerce of any kind, SQL Server is by far the better choice with more options and functionality to add, manipulate, update and delete data.
Following are some specific technical differences between MySQL and SQL Server when it comes down to the ANSI SQL standard:
  • MySQL does not support Triggers, SQL Server does.
  • MySQL does not support User Defined Functions, SQL Server does.
  • MySQL does not have Cursor Support, SQL Server does.
  • MySQL does not support Stored Procedures, SQL Server does.
  • MySQL does not have FULL JOIN capabilities, SQL Server does (see the sketch after this list).
  • MySQL has limited Import/Export capabilities, SQL Server has full support for Import/Export.
  • MySQL has limited Transaction Support while SQL Server offers full Transaction Support.
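To illustrate the FULL JOIN point from the list above, here is a sketch (with hypothetical Customers and Orders tables) of the native SQL Server form and the usual MySQL workaround:

    -- SQL Server: native FULL JOIN returns rows from both tables,
    -- matched where possible
    SELECT c.Name, o.OrderId
    FROM Customers c
    FULL JOIN Orders o ON o.CustomerNumber = c.CustomerNumber;

    -- MySQL: commonly emulated by combining a LEFT JOIN and a
    -- RIGHT JOIN with UNION
    SELECT c.Name, o.OrderId
    FROM Customers c
    LEFT JOIN Orders o ON o.CustomerNumber = c.CustomerNumber
    UNION
    SELECT c.Name, o.OrderId
    FROM Customers c
    RIGHT JOIN Orders o ON o.CustomerNumber = c.CustomerNumber;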
So, it becomes clear that SQL Server is an obvious choice for the company or individual that needs to have FULL control over their data to manipulate and change it as they need.
The following list gives a more complete comparison of the differences between MySQL 4.1 and SQL Server 2000, with notes where available:
  • SQL VIEW support: SQL VIEWs let administrators abstract database designs away from developers.
  • Triggers: Lack of triggers makes MySQL developers add extra logic to their front end and middle tier when the logic should go into the database.
  • Stored Procedures: Stored procedures are mechanisms for abstraction and security.
  • User Defined Functions: User Defined Functions (UDFs) allow encapsulation of complex code into simple callable interfaces.
  • CURSOR Support: Lack of CURSOR support in MySQL increases network traffic and lowers application response time.
  • SQL SELECT: MySQL supports using a regular expression as a filter clause in a query.
  • XML Support: XML is a standards-based format for data. MySQL has no native support for XML.
  • FULL JOIN: MySQL does not support FULL JOIN.
  • Referential Integrity: MySQL 4.0 supports referential integrity (RI) so long as the InnoDB table type is chosen.
  • Transaction Support: MySQL's default installation does not support transactions. Transaction support requires InnoDB.
  • Full Text Support
  • Import/Export Support
  • Replication Support
  • Auto Tuning
  • Database Management Tools
  • Query Analysis Tools
  • Job Scheduling
  • Profiling
  • Online Backup Support
  • Clustering Support
  • Log Shipping
  • Support for Storage Area Networks (SANs)
  • Hot Backups / Incremental Backups
  • Basic Security
  • Security Certifications
  • Security Tools
  • Security through Stored Procedures and Views
  • OLAP Services
  • Data Mining
  • Data Reporting
Security Support

Another major concern to business today is security. While data management can seem to be a mundane process at times, securing critical data from "the outside world" is an ever increasing and trying task.
While it is good to know that your database management system (MySQL or SQL Server) provides security features, it is also very important to know that the application has been verified by a third party. SQL Server has been certified as C2 compliant, which ensures that the database has adequate security support for government applications.
Along with C2 certification, the Microsoft Baseline Security Analyzer helps administrators ensure that their SQL Server installations are up to date with the most current patches and security features. MySQL has no equivalent tool to provide the same confidence in its platform.
While both MySQL and SQL Server support security measures within their platforms, MySQL supports security via the SQL GRANT command, which is limited to granting security at the table level. This means that if any portion of data in the table needs to be secured from a particular user, then the entire table has to be secured from that user. SQL Server supports security at the column level, meaning that any portion of data in a table can be secured from a particular user while allowing that same user to see other portions of the table data.
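A sketch of the difference (the table, column, and user names are hypothetical):

    -- SQL Server: grant SELECT on just two columns of the table
    GRANT SELECT (Name, City) ON Customers TO ReportUser;

    -- MySQL (per the limitation described above): the grant covers
    -- the whole table
    GRANT SELECT ON shop.Customers TO 'reportuser'@'localhost';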
SQL Server also makes its databases more secure by abstracting the data behind a layer of stored procedures. This ensures that developers never see how the actual data is represented. MySQL, as noted above, does not support stored procedures.
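A sketch of the stored-procedure abstraction (object names hypothetical):

    -- The application calls this procedure instead of querying the
    -- table directly
    CREATE PROCEDURE GetCustomerByNumber
        @CustomerNumber INT
    AS
        SELECT Name, City
        FROM Customers
        WHERE CustomerNumber = @CustomerNumber
    GO

    -- Users may run the procedure but never touch the table itself
    GRANT EXECUTE ON GetCustomerByNumber TO ReportUser
    DENY SELECT ON Customers TO ReportUser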

Data Protection

Another security aspect is protection from loss of data due to power failure. Database administrators in large environments employ the concept of "warm standby servers". This is an additional server containing duplicate data that, in normal use, is not accessed until an event such as a power failure of the main server. Standby servers are kept in a ready state using a concept known as log shipping. Log shipping takes a regularly scheduled backup (sometimes at intervals of less than 10 seconds) of a database and stores that data on the duplicate standby server's hardware.
MySQL lacks the capability to perform such a backup transaction in an online, or "live" environment. This feature is standard to SQL Server and includes tools to help the database administrator manage the details of the log shipping.
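At its core, log shipping rests on transaction log backups and restores. A simplified sketch (database and path names hypothetical; real log shipping also automates the copy and restore scheduling):

    -- On the primary server: back up the transaction log on a schedule
    BACKUP LOG SalesDb TO DISK = '\\StandbyServer\Backups\SalesDb.trn';

    -- On the standby server: restore the log, leaving the database in
    -- read-only standby mode, ready to take over
    RESTORE LOG SalesDb
    FROM DISK = '\\StandbyServer\Backups\SalesDb.trn'
    WITH STANDBY = 'C:\Backups\SalesDb_undo.ldf';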

Support and Service

When deciding on how you want to manage your data, you must also consider what kind of support you can expect to receive from the manufacturer of your data management application. After all, if you can't figure out how to make it work for you, you can't make any use of your data.
MySQL and SQL Server both offer support plans that are available from their respective vendors. Both applications offer some free support as well as some paid support options and plans.
MySQL claims to employ about 100 people worldwide, which makes up the entire company, consisting of product development, support staff, sales staff, distribution, and clerical workers. With such a small staff dispersed over a wide area of expertise, this fact may worry some database administrators.
SQL Server, from Microsoft, is backed by a significantly larger support program and staff.

Pricing!

Now, let's move on to the biggest issue of them all: pricing! Let's start with MySQL; how does FREE sound? Well, while MySQL is an open-source database management tool, there are restrictions attached to that $0.00 price! MySQL can be used, for free, for any purpose, whether it's used to manage a large company's data or to spice up a hobbyist's web site.
SQL Server, on the other hand, could take a small independent company's budget and eat it for lunch! However, its many outstanding features that go far and away beyond MySQL do much to offset the price paid for SQL Server.

Conclusion

Now that we have taken a good look at the defining differences between MySQL and SQL Server, it's plain to see that the final decision will come down to your needs. How flexible do you need your data to be? How secure must your data be? Do you need support in utilizing your data management tools? These are all questions that you will have to answer for yourself, and in answering them, you will know which data management tool will be best suited for your personal, business, or corporate needs.
From a database developer's standpoint, the choice is clear. SQL Server is the most sensible choice because of its rich features for manipulating, securing, and managing data. Also, from a developer's standpoint, MySQL's lack of support for basic database features means that developing an application to interface with the database will both be more costly and take longer to finalize. Too much code must be written in the user interface to manipulate the data first, before sending that data to the database. All this extra code costs time and money to develop and maintain.