December 7, 2011 09:53 by
Scott
When you have automated backup jobs running on your database server, it's easy to forget they are even there. Then you forget to check whether they are running successfully, and you don't realize anything is wrong until your database crashes and you can't restore it because you don't have a current backup.
That’s where email notifications come in, so you can see the job status every morning when you are sipping your coffee and pretending you are working.
SQL Server provides a built-in method of sending emails, but unfortunately it requires you to have Outlook and a mail profile installed on the server, which isn't necessarily the ideal way to send an email. Thankfully there is another method that involves installing a stored procedure on your server, allowing you to send email via SMTP.
You will want to edit one line in the stored procedure to put the IP address of your SMTP server:
EXEC @hr = sp_OASetProperty @iMsg, 'Configuration.fields("http://schemas.microsoft.com/cdo/configuration/smtpserver").Value', '10.1.1.10'
Install the stored procedure into the master database, so it can be easily used from wherever needed.
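If you don't have a copy of that stored procedure handy, the following is a minimal sketch of what a CDO-based SMTP mailer typically looks like; it is an illustration rather than the exact script referenced above, and the parameter names, the example SMTP address, and the lack of error handling are assumptions:

-- Minimal sketch of an SMTP mail procedure using the OLE Automation procedures
-- and CDO.Message. 'Ole Automation Procedures' must be enabled on the server.
CREATE PROCEDURE dbo.sp_SQLNotify
    @From    varchar(100),
    @To      varchar(100),
    @Subject varchar(200),
    @Body    varchar(4000)
AS
BEGIN
    DECLARE @iMsg int, @hr int

    -- Create the CDO.Message object and point it at the SMTP server
    EXEC @hr = sp_OACreate 'CDO.Message', @iMsg OUT
    EXEC @hr = sp_OASetProperty @iMsg, 'Configuration.fields("http://schemas.microsoft.com/cdo/configuration/sendusing").Value', '2'
    EXEC @hr = sp_OASetProperty @iMsg, 'Configuration.fields("http://schemas.microsoft.com/cdo/configuration/smtpserver").Value', '10.1.1.10'
    EXEC @hr = sp_OAMethod @iMsg, 'Configuration.Fields.Update', null

    -- Build the message and send it
    EXEC @hr = sp_OASetProperty @iMsg, 'To', @To
    EXEC @hr = sp_OASetProperty @iMsg, 'From', @From
    EXEC @hr = sp_OASetProperty @iMsg, 'Subject', @Subject
    EXEC @hr = sp_OASetProperty @iMsg, 'TextBody', @Body
    EXEC @hr = sp_OAMethod @iMsg, 'Send', NULL

    -- Release the COM object
    EXEC @hr = sp_OADestroy @iMsg
END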
Open up the SQL Server Agent \ Jobs list, and select the properties for the job you are trying to create a notification for:
Click on the Steps tab, and you should see a screen that looks like this:
Click the New button to create a new job step. We will use this step to send the email notification on success.
Step Name: Email Notification Success
Enter this SQL into the Command window as seen below. You will want to customize the email addresses and message subject to match your environment:
exec master.dbo.sp_SQLNotify '[email protected]','[email protected]','Backup Job Success','The Backup Job completed successfully'
Click OK and then click the New button again to create another step. This will be the failure notification step.
Step Name: Email Notification Failure
SQL:
exec master.dbo.sp_SQLNotify '[email protected]','[email protected]','Backup Job Failure','The Backup Job failed'
Now the idea is to make the items follow a specific workflow. First click Edit on step 1, and set the properties as shown here:
What we are saying is that on success, go to the success step, and on failure, go to the failure step. Pretty simple stuff.
Now edit the second step, the one labeled “Email Notification Success”, and set the properties as seen here:
We are saying that if the notification job is successful, then just quit the job without running step 3. If we don’t specify this, then we will end up getting two emails, one with success and one with failure.
Now edit the third step, the one labeled “Email Notification Failure”, and set the properties as seen here:
Now your job steps should look like this:
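If you would rather script this flow than click through the dialogs, here is a rough T-SQL sketch using msdb's sp_update_jobstep; the job name 'Backup Job' and the step numbering are assumptions based on this example:

-- Rough sketch only: step 1 is the backup step, step 2 the success email,
-- step 3 the failure email. on_success_action/on_fail_action = 4 means
-- "go to the step id given", 1 = quit reporting success, 2 = quit reporting failure.
EXEC msdb.dbo.sp_update_jobstep
    @job_name = N'Backup Job', @step_id = 1,
    @on_success_action = 4, @on_success_step_id = 2,
    @on_fail_action = 4, @on_fail_step_id = 3;

EXEC msdb.dbo.sp_update_jobstep
    @job_name = N'Backup Job', @step_id = 2,
    @on_success_action = 1, @on_fail_action = 2;

EXEC msdb.dbo.sp_update_jobstep
    @job_name = N'Backup Job', @step_id = 3,
    @on_success_action = 2, @on_fail_action = 2;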
You should now have email notifications in your inbox for either success or failure.
November 22, 2011 06:01 by
Scott
Here is the source code to encrypt and decrypt the <connectionStrings> section in Web.config using the RSA protected configuration provider model.
Note that the source code below comes from a class library file. You can easily adapt the code and call it from a button click event if you wish.
Hope you find it useful!!
using System;
using System.Collections.Generic;
using System.Text;
using System.Web;
//specific to configuration
using System.Configuration;
using System.Web.Configuration;
using System.Web.Security;

namespace WebUtils
{
    /// <summary>
    /// Contains methods for encrypting and decrypting the connectionStrings
    /// section in web.config.
    /// The current encryption provider is RSA; it is feasible to change this
    /// to DataProtectionConfigurationProvider.
    /// </summary>
    /// <author>Raju Golla</author>
    public class EncryptDecrypt
    {
        // Get the application path using HttpContext
        public static string path = HttpContext.Current.Request.ApplicationPath;

        /// <summary>
        /// Encrypts the web.config connectionStrings section using the
        /// RSA protected configuration provider model
        /// </summary>
        #region Encrypt method
        public static void EncryptConnString()
        {
            Configuration config = WebConfigurationManager.OpenWebConfiguration(path);
            ConfigurationSection section = config.GetSection("connectionStrings");
            if (!section.SectionInformation.IsProtected)
            {
                section.SectionInformation.ProtectSection("RsaProtectedConfigurationProvider");
                config.Save();
            }
        }
        #endregion

        /// <summary>
        /// Decrypts the connectionStrings section in web.config using the
        /// RSA provider model
        /// </summary>
        #region Decrypt method
        public static void DecryptConnString()
        {
            Configuration config = WebConfigurationManager.OpenWebConfiguration(path);
            ConfigurationSection section = config.GetSection("connectionStrings");
            if (section.SectionInformation.IsProtected)
            {
                section.SectionInformation.UnprotectSection();
                config.Save();
            }
        }
        #endregion
    }
}
References:
http://msdn.microsoft.com/en-us/library/ms998280.aspx
November 18, 2011 06:54 by
Scott
Developing database-driven websites using the ASP.NET Web Pages Framework, WebMatrix and Razor is not as evil as most professional ASP.NET web developers might think. It is hard to look back on the 90's and Classic ASP websites with much optimism, as ADO, millions of include files, and spaghetti code became the norm. Maintaining them was an utter nightmare in most cases.
This is a different time, however, and Microsoft.Data.dll is much better than ADO, the Razor syntax is much better than VBScript, and a lot of the security concerns around encoding, SQL injection, proper validation, etc. are much better known and publicized than they were when Classic ASP first came on the scene. At the time, we also didn't have other frameworks, like WebForms and ASP.NET MVC, or other options for data access, like LINQ to SQL and Entity Framework, so we were pretty stuck making all kinds of crazy mistakes, given there wasn't another avenue for more complicated websites.
WebMatrix, the ASP.NET Web Pages Framework with Razor, and Microsoft.Data.dll fit a particular need and development environment around simple, forms-over-data websites that are nothing more than a thin layer on top of a database. Personally, at this level, anything that attempts to abstract the fact that we are just pumping data from the database onto a web page is overcomplicating the issue and making the application more difficult to maintain.
As an example, take your typical Recent News or Recent Posts page on a personal website, which literally pulls data from a single table based on either a publication date or an id. I see no reason to create an ASP.NET MVC or WebForms website in Visual Studio with an O/R mapper, visual designer, codebehind files, view model, a bloated web.config, and other nonsense when you can just take your favorite text editor and add a few lines of Razor syntax to a page in a few minutes, with a much simpler deployment model.
This is far easier to maintain than a similar application written with ASP.NET WebForms or ASP.NET MVC. Here we have one file that can easily be understood in any text editor, with a clear understanding that the data is coming from a database using a particular query and being dumped onto a web page. Want to sort it differently? Want to display more than 20 stories? It is pretty darn clear and pretty darn easy to make those changes, and any changes that fit this simple, database-driven niche.
Other than those 3 lines at the top, this looks pretty close to a view in the ASP.NET MVC Framework. If things get much more complicated and you need to add a layer of abstraction and separate out concerns, you could move those 3 lines out of the view and migrate to MVC. I suspect such an application wouldn't grow to such a need, but if it does, there is a path to a more appropriate framework.
Also notice that by default everything is HTML encoded, with less chance of JavaScript injection. I have to work around the framework and use an HtmlString to display the body of the post when raw HTML should be rendered. We didn't have automatic encoding with Classic ASP :)
Although nothing is perfect and mistakes will be made by new developers, it won't be anything like the 90's and Classic ASP. This is good stuff for simple forms-over-data websites. As professional web developers I realize we worked long and hard to learn how to properly architect complex web applications, but I think the pendulum may have swung a bit too far and we forget the joy, ease, and beauty of developing simple websites using simple techniques and simple tools.
November 16, 2011 16:00 by
Scott
It is important to understand the reporting options available in SQL Server 2008 R2 so you can provide an appropriate report layout that meets the business requirements and needs of your end users. This post briefly discusses the different report layout types available and explains, in detail, the steps to create a tabular report. It also explains how to add sorting and drilldown features to your report.
The different available report layouts are:
- Tabular Reports – these reports are defined using table data and organized into rows and columns. The columns are typically fields in the table. Detail rows can be grouped on various fields, and each group can have a header, footer, page breaks, and subtotals.
- Matrix Reports – to summarize data for analysis we can use matrix reports. They are useful for viewing aggregated values across two different hierarchies (for example, time and geography).
- List Reports – a list report consists of a single rectangular area that repeats for every record or group value in the dataset.
- Chart Reports – these reports provide a visual context for different kinds of data. Sometimes complex information can be analyzed with very little explanation.
- Composite Reports – you can combine the report types discussed above to build compelling, advanced reports.
Creating Tabular Reports
1. Open Report Builder 3.0, select the Insert tab, and click Table on the ribbon.
As we have not yet created a dataset and data source, it will prompt you to add these objects. Use the Data Source Properties window to enter a name for the new data source, as shown below.
2. Enter the following query in the query designer:
SELECT CalendarYear, SalesTerritoryRegion, SalesAmount
FROM FactResellerSales AS F
INNER JOIN DimDate AS D ON F.OrderDateKey = D.DateKey
INNER JOIN DimSalesTerritory AS ST ON F.SalesTerritoryKey = ST.SalesTerritoryKey
ORDER BY CalendarYear, SalesTerritoryRegion
I am using the AdventureWorksDW2008R2 sample database to create this report.
3. Defining table groups – you can define groups by dropping database fields onto the grouping zones located at the bottom of the designer pane.
In this example we are grouping the rows of the table by the CalendarYear and SalesTerritoryRegion fields. To add the CalendarYear field as a group above the detail row, drag and drop the field from the data window.
4. Drag the required fields onto the report designer as shown in the following window. The SalesAmount field holds a currency value, so it should be formatted accordingly. To do so, right-click the SalesAmount textbox and then choose its properties.
5. To look at the report we have built so far, click the Run button in Report Builder. The report is rendered as shown below.
6. Switch back to the design view to add totals to the end of each group. Right-click on the CalendarYear row and choose Insert Row > Inside Group – Below.
The new row will be added below the group values. Hover the mouse over the new cell below the SalesAmount field and click on the field list icon, as shown below.
7. To see the changes, click the Run button to preview the report.
8. Adding a sorting feature to the report – any group can be sorted in any order using a combination of data fields and expressions. For example, to set the sort order for the SalesTerritoryRegion group, click the group name under Row Groups and select Group Properties; you will see the following window.
You can see the preview result in the following window
Creating Drilldown Reports – all report items and groups have a visibility property that can be set either permanently or conditionally. A common use is to create drilldown reports, where group headers are used to expand and collapse the detail rows. Typically a (+) or (-) sign is displayed next to the column header.
To create a drilldown in this example, select the CalendarYear group in the lower part of the designer and click the Group Properties option; you should then see the following dialog box.
Set the initial display option to Hide, check the display toggle option, and select the relevant textbox for the CalendarYear group. Click OK to save these property changes.
November 4, 2011 06:07 by
Scott
Windows Server 2008 R2 introduced a number of compelling features that would entice any Windows administrator to upgrade, and the most welcome feature, in my own opinion, has to be the Active Directory Recycle Bin. Prior to R2, system admins and the like had to rely on paid third-party software to take care of accidental deletions of users or, even worse, organizational units. Those who did not make the investment in third-party software had to rely on system state backups, which is always a disruptive process when you need to perform an authoritative Active Directory restore.
Before we begin, we need to ensure that we have met the minimum requirements for enabling the Active Directory Recycle Bin. In summary, your forest functional level needs to be at least Windows Server 2008 R2. More information can be found in the following TechNet article: http://technet.microsoft.com/tr-tr/library/dd379484(WS.10).aspx
Now that we have met those requirements, we need to run the following command on the AD domain controller where the Schema Master resides. If you are not sure where the Schema Master role resides, follow the TechNet article on how to view and transfer FSMO roles in Windows Server 2003.
On the Schema Master domain controller, open Start / Administrative Tools / Active Directory Module for Windows PowerShell.
Type in the following command (N.B. replace yourdomain.com with your own Active Directory domain name):
Enable-ADOptionalFeature -Identity 'CN=Recycle Bin Feature,CN=Optional Features,CN=Directory Service,CN=Windows NT,CN=Services,CN=Configuration,DC=yourdomain,DC=com' -Scope ForestOrConfigurationSet -Target 'yourdomain.com'
You will get a warning which you will need to confirm stating that enabling the Recycle Bin Feature is irreversible.
That’s it! The Recycle Bin will now begin capturing deletions of objects, which will allow you to later restore them to their original or an alternate location. Now you might be asking, how do we actually perform a restore? Well, I’m glad you asked; that’s the second reason you are reading this article, right?!
For some reason Microsoft did not provide admins with a GUI for doing so; however, there are FREE third-party tools that plug directly into the Recycle Bin feature and provide an easy GUI for restoring Active Directory object(s). Before delving into my current tool of choice, the following Microsoft article explains how it is done via the command line: Restore a Deleted Active Directory Object http://technet.microsoft.com/tr-tr/library/dd379509(WS.10).aspx
My tool of choice (there are several out there) for a graphical interface is PowerGUI in combination with the Active Directory Recycle Bin PowerPack. You can download these from the links below:
Download the latest PowerGUI from http://powergui.org/downloads.jspa
Download the latest Active Directory Recycle Bin PowerPack from http://powergui.org/kbcategory.jspa?categoryID=46
Now that we have the relevant components, install PowerGUI and then import the AD Recycle Bin PowerPack via File / PowerPack Management / Import
Now as a test I have created a Test User account in Active Directory and then deleted the account a few minutes later. Lo and behold when I refreshed the Active Directory Recycle Bin node within the PowerGUI Navigation tree, my Test User was listed in the results pane.
From the Actions menu, you can easily restore the user to either its original location or alternate location. From the Actions menu you can also configure the recycle bin further via a GUI, and empty the recycle bin completely.
As you can see from the above, the Active Directory Recycle Bin is a long-awaited feature introduced with Windows Server 2008 R2, and with a front end like PowerGUI you can now easily and very quickly restore accidentally deleted Active Directory objects. Now I wonder if Microsoft will incorporate its own graphical interface in the near future.
November 1, 2011 07:38 by
Scott
“Track Changes” is one of the most interesting features in Visual Studio: it indicates code changes with a color indicator at the beginning of each line. Up to VS 2008 there were two color indicators, green and yellow; VS 2010 introduces a new color, orange, which indicates an additional state for changes undone after a save. In this blog post I am going to explain how these color indicators help developers track code changes.
In Visual Studio 2010, there are three color indicators:
Green indicates lines that you edited before your last save. Save the file again and the green mark will disappear.
Yellow indicates lines that you have edited since the last save of the file. Yellow becomes green after the file is saved. Once you close the file, the indication disappears.
Orange is newly introduced in VS 2010. This color appears when the user performs an undo after a save operation on the current file. Orange indicates that the current changed line is different from the saved version of the file.
How to Enable / Disable the Track Changes Feature?
To enable or disable Track Changes, go to Tools > Options > Text Editor. In the General section, you can check or uncheck the “Track changes” option.
Key note: the “Selection Margin” display option must also be selected; otherwise Track Changes won’t work.
A Quick Walkthrough of Track Changes
Suppose you have opened a code file in Visual Studio 2010 containing the lines of code below. By default there is no color indicator next to the line numbers.
Now, once you start editing the file, the yellow indicator shows that you have made changes on those lines.
When you are done with your changes, save the file. The green indicator shows that all of your changes have been saved.
Now, if you undo your changes after the save, the orange indicator tells you that those lines are different from the saved version. It means you changed something in the original file, saved it, and then undid the change.
When you first start undoing, the line shows as yellow, which means you are still editing it; but once you have finished undoing and the line differs from the saved version, the color becomes orange, meaning it is different from the saved version.
To see this clearly, just compare the two images below. All of the orange-marked lines match the default version of the file shown in the first image.
But the saved version of the file is:
Similarly, Track Changes also works for config files.
Summary: Track Changes is a very useful feature in Visual Studio that lets you see where you have edited or saved a file within the current VS IDE session. Green indicates lines you edited before your last save. Yellow indicates lines you have edited since the last save of the file. Orange indicates that the current changed line is different from the saved version of the file.
If you want to know more about editor settings, please check the link below:
How to: Set Text Editor Options
October 22, 2011 06:47 by
Scott
This is great for developers, testers, and anyone looking to learn Active Directory or deploy to a small network. If this is for a production deployment, you might want to bring in a professional to help you. There are many other things to consider, like ‘hardening’ your server and setting up Group Policy. Having an insecure or unprotected domain controller is inviting havoc on your network.
So without any further ado and in the immortal words of ‘Marv’, “Let’s get to it!”
In the Server Manager click on Add Roles.
Click next on the ‘Before You Begin’ screen if it shows. On the next screen, ‘Select Server Roles’, check the box for Active Directory Domain Services. After checking the box, you may receive a window that says you need to add required features, click the button marked Add Required Features.
Then, back at the ‘Select Server Roles’ window, click Next. Here you can do some reading if you’re unfamiliar with Active Directory. There are links for an overview, installation instructions, and common configurations. There are also some notes saying that it is advisable to have at least 2 domain controllers, that you’ll need a DNS server, that you’ll have to run DCPROMO.exe, and that you’re also installing DFS (Distributed File System) and some replication services tied to DFS.
Click Next and you’ll see the ‘Confirm Installation Selections’ window. Click the button marked Install.
The ‘Installation Progress’ window will appear letting you know what the system is doing. After a few minutes the ‘Installation Results’ window will appear. Click the link marked Close this wizard and launch the Active Directory Domain Services Installation Wizard (dcpromo.exe).
Another wizard will open, ‘Active Directory Domain Services Installation Wizard’. Click Next
Read the note on the next screen titled ‘Operating System Compatibility’. The link to the KB article 942564 underneath is (http://go.microsoft.com/fwlink/?LinkId=104751). Click Next. On the ‘Choose a Deployment Configuration’ screen, we’ll choose Create a new domain in a new forest for the purposes of this tutorial. If you’re attempting to add a domain controller to an existing domain / forest, you would choose the ‘Existing Forest’ checkbox. Click Next
Here’s where you input what you want your FQDN (Fully Qualified Domain Name) to be. Then click Next.
The system will confirm that the FQDN is not in existence already on your network, then allow you to choose your Domain NetBIOS name. After doing so, click Next. The system will then confirm that NetBIOS name is not in use.
On the next screen, you select what you want your forest functional level to be. You can choose Windows Server 2003, 2008, or 2008 R2. In this tutorial we’ll be setting the forest functional level to Windows Server 2008 R2. If you’ll be connecting other DCs that are running Windows Server 2008 or 2003, then you will need to choose a compatible level. Click Next.
Now we’ll install the DNS server. Make sure the DNS server checkbox is checked, then click Next. Domain controllers (DCs) require Domain Name Services.
Click Yes at the next window, which is warning you that delegation cannot be configured for the parent zone. Don’t worry, there is no parent zone. Accept the default locations for your Database, Log Files, and SYSVOL folders, or change them if you really like. Click Next.
Input a password, twice, in the ‘Directory Services Restore Mode Administrator Password’ window. Then click Next. Review your selections and click Next.
The wizard will then install and configure Active Directory Domain Services and the DNS service on the DC. Click Finish, and choose to restart.
Congratulations, you’ve just completed the basic setup of an Active Directory domain controller and DNS support services on Windows Server 2008 R2. After the reboot, you can log into your server using the administrator account and the password that was previously assigned to the local administrator account. NOTE: the password you were using is now your domain admin password, so it is advisable to make sure that password is STRONG.
October 5, 2011 06:48 by
Scott
In this tutorial, I will show you how to get the column names and corresponding data types for a particular table in SQL Server.
Description:
I had a table with a lot of columns in my database, and I needed to know the column names, their corresponding data types, and their maximum sizes. To get that information, I wrote the following query in SQL Server:
USE MySampleDB
GO
SELECT column_name 'Column Name',
data_type 'Data Type',
character_maximum_length 'Maximum Length'
FROM information_schema.columns
WHERE table_name = 'Country'
Demo
If you get an error with the above query like
Invalid object name 'information_schema.columns'
it is because your database uses a case-sensitive collation. To fix the error, write the query like this:
USE CRMK
GO
SELECT column_name 'Column Name',
data_type 'Data Type',
character_maximum_length 'Maximum Length'
FROM INFORMATION_SCHEMA.COLUMNS
WHERE table_name = 'Country'
Here you need to enter your table name exactly as it was created: if the table was created with a capitalized name, you must enter the capitalized name, because the table name value is case sensitive as well.
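As a side note (this is an alternative sketch, not from the original post), the same information can also be pulled from the catalog views, whose names are always lower case:

-- Alternative sketch using the catalog views.
-- Note: max_length is reported in bytes, so nvarchar columns show
-- twice their character count (unlike character_maximum_length above).
SELECT c.name       AS [Column Name],
       t.name       AS [Data Type],
       c.max_length AS [Maximum Length]
FROM sys.columns AS c
INNER JOIN sys.types AS t ON c.user_type_id = t.user_type_id
WHERE c.object_id = OBJECT_ID('dbo.Country')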
September 5, 2011 11:09 by
Scott
I got this error message when trying to restore a database backup to SQL Server 2008:
Msg 3176, Level 16, State 1, Line 1 File 'D:\SQL Server 2008 DBs\test01.mdf' is claimed by 'SCTA_ORG_SECOND'(3) and 'SCTA_ORGANIZATION'(1). The WITH MOVE clause can be used to relocate one or more files.
Msg 3013, Level 16, State 1, Line 1
RESTORE DATABASE is terminating abnormally
Steps to fix this issue:
Select Script Action to New Query Window, as in the image above, and you will get the following T-SQL, which shows the reason for the error:
RESTORE DATABASE [test01] FROM DISK = N'D:\SCTA_Org2.bak' WITH FILE = 1,
MOVE N'SCTA_ORGANIZATION' TO N'D:\SQL Server 2008 DBs\test01.mdf',
MOVE N'SCTA_ORG_SECOND' TO N'D:\SQL Server 2008 DBs\test01.MDF',
MOVE N'SCTA_ORGANIZATION_log' TO N'D:\SQL Server 2008 DBs\test01_1.ldf',
NOUNLOAD, STATS = 10
GO
You will notice there are two data files with the same .mdf name, so just change the name of the second one to test02 or to test01.ndf (a different extension), then run the command and the database is restored successfully.
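For example, the corrected version of the generated script, changing only the second file's extension, would look like this:

RESTORE DATABASE [test01] FROM DISK = N'D:\SCTA_Org2.bak' WITH FILE = 1,
MOVE N'SCTA_ORGANIZATION' TO N'D:\SQL Server 2008 DBs\test01.mdf',
MOVE N'SCTA_ORG_SECOND' TO N'D:\SQL Server 2008 DBs\test01.ndf',
MOVE N'SCTA_ORGANIZATION_log' TO N'D:\SQL Server 2008 DBs\test01_1.ldf',
NOUNLOAD, STATS = 10
GO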
So the logical explanation for this error: the first test01.mdf is the primary data file and the second is a secondary data file, but the name and extension are the same, which is why you have to change the name or extension of the second file to something different.
Note: the extension can be anything (it could be .fad or .ndf, for example), but .ndf is the best practice because the extension helps identify what the file is for – .ldf for log files, .ndf for secondary data files, and so on.
Finally, I think the original database backup came from SQL Server 2000; maybe this behavior was allowed in SQL Server 2000, or the names were treated as case sensitive (test01.mdf not being the same as test01.MDF).