European Windows 2019 Hosting BLOG

BLOG about Windows 2019 Hosting and SQL 2019 Hosting - Dedicated to European Windows Hosting Customers

European Windows 2012 Hosting - France :: A Further Look at Windows Server 2012 PowerShell 3.0

September 24, 2013 08:34 by author Scott

Now that Windows 8 and Windows Server 2012 are available, the same is true for Windows PowerShell 3.0, since it is included in the operating system. Windows PowerShell will also be available for down-level operating systems (Windows 7, Windows Server 2008 and Windows Server 2008 R2) shortly, as part of the Windows Management Framework (WMF). In addition to PowerShell, new versions of Windows Management Instrumentation (WMI) and Windows Remote Management (WinRM) are included in the WMF.

What is new?

PowerShell 2.0 brought a whole set of new features, including background jobs, remoting and the PowerShell ISE. In PowerShell 3.0 a great number of improvements have been made to these features, and many new ones have been added. I will go through some of the major new features:

Workflows – Based on the Windows Workflow Foundation, the PowerShell team has brought workflows into PowerShell. A workflow is a sequence of automated steps, or so-called activities, which perform tasks or receive data from managed devices. This makes it possible for IT professionals to perform automated tasks against a wide variety of devices, for example software installation. A practical example is the installation and configuration of a Windows Server Failover Cluster, where installation and configuration can be orchestrated from a workflow. Among the features of a workflow is the ability to suspend and resume execution, whether the interruption is planned or caused by a temporary network outage. You can see examples and read more about this feature on the PowerShell team blog.

Enhancements to PowerShell Remoting – Robust sessions are a new feature in PowerShell Remoting which makes it possible for a PowerShell Remoting session, a so-called "PSSession", to survive a temporary network outage. Delegated administration is another new remoting feature, where a RunAs account can be configured on a remoting endpoint. This makes it possible to delegate tasks to, for example, helpdesk users, without needing to grant them rights on the underlying application itself.

Simplified syntax – Especially for beginners, the syntax for various parts of PowerShell can be hard to remember and understand. An example of this is the syntax for the -FilterScript parameter of Where-Object and the -Process parameter of ForEach-Object, both of which accept a so-called script block. In version 1.0 and 2.0 of PowerShell we had to use the $_.propertyname syntax inside this script block, for example Get-Service | Where-Object {$_.Status -eq 'Running'}. In version 3.0 this still works, but there is an alternative, more user-friendly syntax as well: Get-Service | Where-Object Status -eq 'Running'. Here we did not have to use the curly brackets or the $_. syntax. Note that you still have to use the existing syntax if you are doing more than one comparison; even so, this makes things much easier for beginners, who are likely to start with single comparisons. Experienced users will also enjoy this feature since it requires less typing.
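To make the difference concrete, here is a quick sketch of both forms (the compound condition is just a hypothetical example of a case where the classic script block is still required):

# Classic script-block syntax: works in all versions, and still required for compound conditions
Get-Service | Where-Object { $_.Status -eq 'Running' -and $_.Name -like 'w*' }

# Simplified syntax, new in PowerShell 3.0: single comparison only
Get-Service | Where-Object Status -eq 'Running'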

More user friendly – A lot of enhancements have been made to make PowerShell more user friendly. A common mistake for new users is not loading the required module for the cmdlet they want to run. For example, if you run Get-ADUser without first running Import-Module ActiveDirectory, you would get an error message stating that Get-ADUser is not recognized. In PowerShell 3.0 there is a new feature called module autoloading, which automatically loads the required module for the cmdlet being run. Other features in terms of user friendliness are the new cmdlet Show-Command, as well as the IntelliSense feature in the PowerShell ISE. You can read more about these two features on the PowerShell team blog.
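As a small illustration, the sketch below assumes the Active Directory module is installed (for example through RSAT), and jdoe is a hypothetical account name:

# In PowerShell 3.0 this works without running Import-Module ActiveDirectory first;
# the module is loaded automatically when the cmdlet is invoked
Get-ADUser -Identity jdoe

# Show-Command opens a graphical form that helps you build a command interactively
Show-Command Get-Service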

Windows PowerShell Web Service – Makes it possible to expose a set of PowerShell cmdlets as a RESTful endpoint via OData (Open Data Protocol). This makes it possible to run PowerShell cmdlets from both Windows and non-Windows devices. Note that this feature is targeted more at advanced users and developers.

Windows PowerShell Web Access – If you have used Microsoft Exchange Server's webmail functionality, OWA, this feature will look familiar. The sign-in page for PowerShell Web Access looks very similar to the OWA sign-in page. When logged in, you will be presented with a PowerShell session. This makes it easy to use PowerShell both from a web browser on your computer and from mobile devices such as an iPhone or Windows Phone. Note that this feature requires Windows Server 2012. You can find instructions on how to configure this feature on Microsoft TechNet.

Updateable help – Until PowerShell 3.0, the help files that are parsed when you use the Get-Help cmdlet have been part of the installation. Updating these files has not been possible, since rolling out help files through the channels for updating the operating system (Windows Update, WSUS) could not be justified. For this reason, it was not possible for the PowerShell team to correct errors and enhance the help files after the product had shipped. To overcome this limitation, a new feature named updateable help has been added in version 3.0. There is a new cmdlet called Update-Help you can execute in order to update the help files. If you need to download the files in order to bring them over to a computer not connected to the internet, you can use the Save-Help cmdlet. You can read more about updateable help in articles by PowerShell MVP Don Jones.
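In practice, updating help looks like the sketch below; the UNC path is a hypothetical share you might use for machines without internet access:

# Download the newest help files from Microsoft (run from an elevated session)
Update-Help

# Save the help files to a share, then update an offline machine from that share
Save-Help -DestinationPath \\fileserver\PSHelp
Update-Help -SourcePath \\fileserver\PSHelp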

Microsoft Script Explorer – Technically this is not a part of PowerShell 3.0, but rather a standalone download released in the same timeframe as PowerShell 3.0. Using Script Explorer you can search for scripts and other resources both on Microsoft TechNet and in third-party repositories and local UNC paths, for example a company repository. Script Explorer can either be run as a standalone application or be integrated into the PowerShell ISE as an add-on. By integrating it into the ISE you can copy scripts you find directly into the editor. Script Explorer will also support Windows PowerShell 2.0.

In addition to the features mentioned above, a great number of bug fixes and enhancements have been made based on feedback from Microsoft Connect.



European Windows 2008 R2 Hosting - Spain :: How to Determine I/O Usage on the Server?

September 18, 2013 11:29 by author Scott

Have you ever wondered which processes consume I/O on your server? I searched on forums but couldn't find a definitive answer, so I decided to find out myself and ran some experiments. In this post, I will show some analysis of how we can determine the I/O usage on our server.

The easiest way to get a quick view into your I/O usage is to use Task Manager.  I will show you how to find that information.  First you need to open up Task Manager.  To do this, right-click the task bar and choose Start Task Manager.  When you have it running, click the Processes tab.  Then from the menu choose View -> Select Columns…

This will bring up the Select Process Page Columns dialog.  Scroll down to the bottom and put a check beside I/O Reads, I/O Writes, I/O Read Bytes, and I/O Write Bytes.

Click OK when you are finished and the columns will be added to Task Manager.  You should note that the numbers listed in Task Manager are totals for each of those items since the last boot of the system. 

Here is a description of the columns of the counters that were added to Task Manager above.

I/O Reads -  The number of read input/output operations generated by a process, including file, network, and device I/Os. I/O Reads directed to CONSOLE (console input object) handles are not counted.

I/O Writes – The number of write input/output operations generated by a process, including file, network, and device I/Os. I/O Writes directed to CONSOLE (console input object) handles are not counted.

I/O Read Bytes – The number of bytes read in input/output operations generated by a process, including file, network, and device I/Os. I/O Read Bytes directed to CONSOLE (console input object) handles are not counted.

I/O Write Bytes – The number of bytes written in input/output operations generated by a process, including file, network, and device I/Os. I/O Write Bytes directed to CONSOLE (console input object) handles are not counted.

You can look at the new columns that are showing up in Task Manager and see the processes that have used the most I/O since the last reboot of the server.  Often this is all the information you need to narrow down the top I/O usage per process.
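If you prefer a scriptable view of the same totals, the Win32_Process WMI class exposes the same four counters per process. A minimal PowerShell sketch (note that WMI reports totals since each process started, not since boot):

# Top 10 processes by bytes read, with all four I/O counters
Get-WmiObject Win32_Process |
    Sort-Object ReadTransferCount -Descending |
    Select-Object -First 10 Name, ReadOperationCount, WriteOperationCount, ReadTransferCount, WriteTransferCount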

Sometimes using Task Manager is not enough to help you narrow down the usage.  Maybe you need to know what is using the I/O right now, or want to paint a picture of I/O usage over the next 7 days.  That is a good job for Performance Monitor.  Before we look at how to add the counters to Performance Monitor, here is the mapping between the Task Manager columns described above and their corresponding Performance Monitor counters:

Task Manager        Performance Monitor
---------------     --------------------------------
I/O Reads           Process\I/O Read Operations/sec
I/O Writes          Process\I/O Write Operations/sec
I/O Read Bytes      Process\I/O Read Bytes/sec
I/O Write Bytes     Process\I/O Write Bytes/sec

To start up Performance Monitor click the Start button.  In the 'Search programs and files' text box enter perfmon. 

Select perfmon from the results to open Performance Monitor.  Once you have it open, select Performance Monitor from the left menu so that it is highlighted.  Performance Monitor will then begin collecting information.  Next click the green plus icon to add a counter.

When the Add Counters dialog window comes up, look for the Process object and click the plus to the right of it.  That will expand the counters that are under the Process object.  Based on the information that you got from Task Manager above, you can now add the appropriate counter.  You will notice that when you select a counter it also wants to know if you want _Total (the total amount from all processes for that counter), 'all instances' (which will add all the instances to Performance Monitor), or a specific process.  I recommend that you set the counter up for one or more specific processes that you previously found with the highest total I/O in Task Manager.  Choosing 'all instances' will make it hard to read the results and might cause some additional resource usage on your server.
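These counters can also be sampled from PowerShell with the Get-Counter cmdlet. A minimal sketch that samples a hypothetical sqlservr process instance every 5 seconds for one minute:

# Sample I/O byte counters for a specific process (12 samples at 5-second intervals)
Get-Counter -Counter '\Process(sqlservr)\IO Read Bytes/sec', '\Process(sqlservr)\IO Write Bytes/sec' -SampleInterval 5 -MaxSamples 12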

Using this information you should now be able to track down what is using the most I/O on your system.

NOTE:  The step-by-step instructions in this article are based on Windows 2008 R2.  The I/O information pertains to earlier operating systems as well, but the actual steps might be different.



European SQL 2012 Hosting - Germany :: EXECUTE Statement Using WITH RESULT SETS in SQL 2012

August 16, 2013 07:06 by author Scott

Microsoft SQL Server 2012 extends the EXECUTE statement to introduce the WITH RESULT SETS option, which can be used to change the column names and data types of the result set returned by the execution of a stored procedure.

 

Example Using WITH RESULT SETS Feature of SQL Server 2012

Let us go through an example which illustrates the WITH RESULT SETS feature of SQL Server 2012.

USE AdventureWorks2008R2
GO

IF EXISTS (SELECT * FROM sys.objects
           WHERE object_id = OBJECT_ID(N'[dbo].[WithResultSets_SQLServer2012]')
             AND type IN (N'P', N'PC'))
    DROP PROCEDURE [dbo].[WithResultSets_SQLServer2012]
GO

CREATE PROCEDURE WithResultSets_SQLServer2012
AS
BEGIN
    SELECT TOP 5
           PP.FirstName + ' ' + PP.LastName AS Name,
           PA.City,
           PA.PostalCode
    FROM Person.Address PA
         INNER JOIN Person.BusinessEntityAddress PBEA
                 ON PA.AddressID = PBEA.AddressID
         INNER JOIN Person.Person PP
                 ON PBEA.BusinessEntityID = PP.BusinessEntityID
    ORDER BY PP.FirstName
END
GO

Once the stored procedure has been created successfully, the next step is to execute it using the WITH RESULT SETS feature of SQL Server 2012.

/* Execute the stored procedure the usual way */
EXEC WithResultSets_SQLServer2012
GO

/* Example - Using the WITH RESULT SETS feature of SQL Server 2012 */
EXEC WithResultSets_SQLServer2012
WITH RESULT SETS
(
    (
        [Employee Name]        NVARCHAR(100),
        [Employee City]        NVARCHAR(20),
        [Employee Postal Code] NVARCHAR(30)
    )
)
GO

Once you execute the WithResultSets_SQLServer2012 stored procedure using the WITH RESULT SETS feature of SQL Server 2012, you can change the column names and data types as per your needs without actually altering the existing stored procedure. In the second result set you can see that the column names are changed from Name to Employee Name, City to Employee City and PostalCode to Employee Postal Code. Similarly, the data type was changed from VARCHAR to NVARCHAR.

Conclusion

The WITH RESULT SETS feature of SQL Server 2012 is a great enhancement to the EXECUTE statement. This feature will be widely used by Business Intelligence developers to execute a stored procedure within a SQL Server Integration Services (SSIS) package to return the result set with the required columns and modified data types.



European Windows 2012 Hosting - Amsterdam :: Creating Storage Pool Windows Server 2012

August 15, 2013 07:58 by author Scott

Storage Spaces, the Windows Server 2012 storage subsystem, is a storage virtualization platform that allows fast and easy provisioning of storage pools, and the virtual hard disks that they host.

This article provides an in-depth look at how to create a storage pool on Windows Server 2012, using both PowerShell cmdlets and the Server Manager GUI tools.

Before you can create a storage pool on your Windows Server 2012 computer, you need to add some storage to it. This can be either SAS or SATA drives, installed either internally or externally, such as a JBOD or a SAN array.

Here are the steps to create a storage pool from the management GUI:

1. Open Server Manager, then select “File and Storage Services.”

2. Select “Storage Pools” from the left side menu.

Then select “New Storage Pool” from the Tasks actions list.

3. Click Next on the “Before you begin” dialog.

4. Name your storage pool.

5. Select physical drives to add to the storage pool.

6. Click “Create” on the confirmation dialog box. If you want to create a Virtual Disk immediately, there is a checkbox to bring up the New Virtual Disk wizard on the results screen. Click “Close” to complete the storage pool.

Next, we will create the same storage pool with PowerShell.

As seen in the management GUI, there is not much information that is required to create a storage pool.

The three things that are required are:

1. The storage pool name
2. Which disks to use to create the pool
3. The storage subsystem (Storage Spaces)

The cmdlet we use to create the storage pool is New-StoragePool. While the only three things that are required are name, disks, and subsystem, New-StoragePool also provides some other more advanced features.

The name of the storage pool will be passed through the “FriendlyName” parameter.

The disks to create the storage pool on are passed to New-StoragePool in the "PhysicalDisks" parameter. Which disks are available can be found using the Get-PhysicalDisk cmdlet, made even easier with the "-IsPooled" parameter (which will either return all of the disks that are already pooled, or, if set to false, all of the disks not already in a pool). The Get-PhysicalDisk cmdlet can be run as part of the -PhysicalDisks parameter, or it can be run beforehand and the results stored in a variable. If you are creating a script that will be reused, it's advisable to use a variable, so that the script is easier to read and understand.

#Inline, as typed in at the console (incomplete - it would still need the storage subsystem)

New-StoragePool -PhysicalDisks (Get-PhysicalDisk -IsPooled $false) -FriendlyName "Pool1"

#Easier to read and understand in a saved script

$disks = Get-PhysicalDisk -IsPooled $false

New-StoragePool -PhysicalDisks $disks -FriendlyName "Pool1"

The storage subsystem in this case is the "Storage Spaces" instance of a storage subsystem. It is returned by the Get-StorageSubSystem cmdlet. In the New-StoragePool cmdlet, it is passed in as either the unique ID, the name, or the friendly name of the subsystem. For simplicity, it is helpful that New-StoragePool accepts the storage subsystem to create the storage pool on through the pipeline.

#This uses the $env:computername environment variable to provide the Storage Spaces subsystem,
#assuming only one subsystem is installed on the system

$disks = Get-PhysicalDisk -IsPooled $false

Get-StorageSubSystem -FriendlyName "Storage Spaces on $env:computername" |
    New-StoragePool -FriendlyName "Pool1" -PhysicalDisks $disks

That is everything that is needed to create a basic storage pool. However, these optional parameters for New-StoragePool may provide some benefit.

ResiliencySettingNameDefault – specify the default resiliency setting for new Storage Spaces created on the storage pool.
ProvisioningTypeDefault – specify the default provisioning type for new Storage Spaces created on the storage pool.
IsEnclosureAware – Used if the enclosure containing the disks supports SCSI Enclosure Services. SCSI Enclosure Services provides extra information such as slot location, and LEDs on the enclosure.
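Putting these together, here is a minimal sketch that creates a pool and sets its defaults in one go. It assumes your version of the storage cmdlets exposes the -CanPool switch on Get-PhysicalDisk (equivalent to the -IsPooled $false filter used above) and the two default-setting parameters; Mirror and Thin are just illustrative choices:

$disks = Get-PhysicalDisk -CanPool $true

Get-StorageSubSystem -FriendlyName "Storage Spaces on $env:computername" |
    New-StoragePool -FriendlyName "Pool1" -PhysicalDisks $disks `
        -ResiliencySettingNameDefault Mirror -ProvisioningTypeDefault Thin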

To take advantage of Storage Spaces, the storage virtualization technology in Windows Server 2012, you first need to add storage to your server. Once the storage has been added, it needs to be grouped together in storage pools. The storage pools are then used to host the virtual hard disks.

Create Storage Pools with the Management GUI and PowerShell

Storage pools can be created either through the management GUI or through PowerShell. The management GUI is easier if you are not familiar with the commands used to create storage pools. However, once you are familiar with the commands, PowerShell becomes the easier and faster way to create storage pools.

To use the management GUI for creating storage pools, you access "File and Storage Services" from Server Manager. From there, you can access the storage pools and take actions on them, such as creating new storage pools, deleting storage pools, or renaming them. There are fewer options available when creating storage pools from the management GUI. For example, you cannot specify the default VHD provisioning type on the storage pool when it's created with the management GUI.

To use PowerShell for creating the storage pools, you need to use three cmdlets:

- First, you need to get the storage subsystem using the get-storagesubsystem cmdlet.
- Second, you need to find the disks you will use to create the storage pool using the get-physicaldisk cmdlet.
- Finally, you will use the storage subsystem and the physical disks together as parameters in the new-storagepool cmdlet.

You can use more options for creating the storage pools by using PowerShell, and you can also save the script to use on multiple systems if needed.

Now that you've added the storage pools to your Windows Server 2012, you'll be ready to add storage spaces onto them and you'll officially be using storage virtualization!



European Windows 2012 Hosting - Amsterdam :: Cluster Shared Volumes (2.0) in Windows Server 2012

August 5, 2013 12:09 by author Scott

Cluster Shared Volumes was first introduced in Windows Server 2008 R2, and was almost as popular as sliced bread at the time. It was a great enhancement, and it was solely meant for Hyper-V virtual machines.

Instead of using a dedicated LUN for each VM (so that you could migrate a VM between cluster nodes without taking down the other VMs on the same LUN), as in Windows Server 2008, you now had the ability to store multiple VMs on the same LUN by converting it to a CSV.

CSV is a distributed file access solution that lets multiple nodes in a cluster access the same file system simultaneously.

This means that many VMs can share the same volume, while you can fail over, live migrate and move VMs without affecting the other virtual machines. This leads to better utilization of your storage, since you don't have to place VMs on separate disks, and CSVs do not depend on drive letters, so you can scale this configuration out if you'd like.

What’s the latest and greatest related to CSV 2.0:

- Windows Server 2012 has brought some changes to the architecture, so there's now a new NTFS-compatible file system, called CSVFS. This means that applications running on a CSV are able to discover this and leverage it. The underlying file system, however, is still NTFS.

- BitLocker support has been added to the list, which means you can secure your CSVs in a remote location. The Cluster Name Object is used as the identity for decryption. You should consider this in every cluster deployment you are doing, because the performance penalty is less than 1%.

- Direct I/O for data access, which gives enhancements for virtual machine creation and copy operations.

- Support for roles other than virtual machines. There's an entirely new story around SMB in Windows Server 2012, and CSV is also affected by this. You can now put an SMB file share on top of your CSVs, which makes it easier to scale out your cluster storage and to share a single CSV among several clusters, where they will access their shares instead of volumes. Just a reminder: you can run Hyper-V virtual machines from an SMB file share in Windows Server 2012. This requires that both the server and the client are using SMB 3.0.

- The marriage to Active Directory has come to an end. External authentication dependencies, which you would run into if you started your cluster without an available AD, are now removed. This gives us an easier setup of clusters, with less trouble and fewer dependencies.

- File backup is supported for requestors running Windows Server 2008 R2 or 2012. You can use application-consistent and crash-consistent VSS snapshots.

- SMB support with Multichannel and Direct. CSV traffic can now stream across multiple networks in the cluster and utilize the performance of NICs that support RDMA.

- Integration with Storage Spaces (new in Windows Server 2012), so that you can leverage cheap disks (just a bunch of disks, JBOD) in a cluster environment.

- Maintenance by scanning and repairing volumes with no downtime.

Although there are several enhancements for VM mobility in 2012, where you can move VMs without shared storage, there are still significant benefits to clustering your Hyper-V hosts.

 



European Windows 2012 Hosting - Amsterdam :: Windows Server 2012 Hyper-V Replica

July 29, 2013 07:30 by author Scott

For small businesses, there’s often no easy answer when it comes to disaster recovery. With many storage and DR options out of reach, finding an affordable way to protect data can be challenging. But in Hyper-V Replica, Microsoft offers a new path to DR for many SMBs.

Usage Scenarios

Hyper-V Replica, one of many operational and performance improvements to Hyper-V in Windows Server 2012, allows users to replicate virtual machines over slow, unreliable network links. Replica is purely a DR solution and doesn’t replace clustering, but it can be used to replicate clusters of Hyper-V servers.

With Replica, SMBs can set up a DR site at a branch office (or at a remote site hosted by a service provider) and use Hyper-V to replicate the contents of selected VMs at 5-minute intervals.

Unlike most enterprise-grade replication technologies, Hyper-V Replica does not require a reliable, high-bandwidth network link between primary and DR sites. It’s designed to work with links that don’t run over dedicated lines, such as broadband or asymmetric digital subscriber line (ADSL) technologies, which may be subject to short outages.

The Replication Process

To begin, Hyper-V Replica creates an initial copy of the primary server. That file is then copied over, either manually or over the network, to the Replica server at the recovery site. Replica then writes to a log file (Hyper-V Replica Log, or HRL) on the primary server every time there’s a write operation. This HRL file is replicated on the Replica server and is replayed every 5 minutes, in reverse, replicating only changes made since the last replay.

Due to the 5-minute replication frequency, if the primary server goes offline, up to 10 minutes of data might be lost. If Hyper-V can’t replicate at the 5-minute interval, an alert is written to the event log. If replication fails for more than 30 minutes, the Replica server is put into a failed state and replication must be manually restarted — so it’s possible that an hour of data could be lost at the DR site.

The replication schedule is designed to keep the state of the operating system consistent after a failover to the Replica server. If it’s critical that applications remain consistent, Hyper-V Replica can be set to create additional Volume Shadow Copy Service (VSS) recovery points for this purpose.

Ensuring Security with Hyper-V Replica

Before implementing Hyper-V Replica, Windows Firewall must be configured to allow the necessary replication traffic to pass through. This applies as well to any other firewalls that separate the corporate intranet from the DR site. Replica comes with default Windows Firewall rules, but they must be enabled manually in the Windows Firewall with Advanced Security management console. There are two inbound rules:

  1. Hyper-V Replica HTTP Listener (TCP-In)
  2. Hyper-V Replica HTTPS Listener (TCP-In)


The HTTPS rule is needed only if certificate-based authentication is used. Hyper-V in Windows Server 2012 also includes a new simple authorization model in which a Hyper-V Administrators group is created on each server that hosts the Hyper-V role. Hyper-V Administrators can configure Replica servers to accept incoming connections from specific Hyper-V servers.
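The default rules can also be enabled from PowerShell with the built-in NetSecurity cmdlets; a minimal sketch (enable the HTTPS rule only if you use certificate-based authentication):

# Enable the inbound Hyper-V Replica listener rules
Enable-NetFirewallRule -DisplayName "Hyper-V Replica HTTP Listener (TCP-In)"
Enable-NetFirewallRule -DisplayName "Hyper-V Replica HTTPS Listener (TCP-In)"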

There are two different types of authentication that Hyper-V Replica uses. Kerberos, a network authentication protocol, is the default option, but certificate-based authentication allows replication between primary and Replica servers to be encrypted.

Integrated authentication uses the Kerberos protocol and can be established between trusted Active Directory domains. In the absence of Active Directory, X.509v3 certificates can be used instead of Kerberos. Replication Authorization Tags protect Replica VMs residing on a single server in a data center, which is useful for hosting service providers that need to stop clients from gaining access to Replica VMs other than their own.

Installing Hyper-V Replica

Replica works in both workgroup and domain scenarios, and Replica servers don’t need to be in the same AD domain as primary servers. The first step is to configure a Replica server in a branch office by adding the Hyper-V role. This is done either with Server Manager or by using PowerShell. Once the role is installed, the server must be restarted. The following PowerShell command will install the Hyper-V role and management tools:

Install-WindowsFeature -Name Hyper-V –IncludeManagementTools

Now open Hyper-V Settings in the Actions pane of Hyper-V Manager, click on Replication Configuration and select Enable this computer as a replica server in the Details pane. Choose an authentication method, and don’t forget to enable the default Windows firewall rules. In the Authorization and storage section, the system can be set to allow replication from any authenticated server or from a specific list of servers.

VMs in the primary site can be enabled for replication with a simple wizard. In Hyper-V Manager, select a VM in the Details pane, right-click the VM and choose Enable Replication. The wizard allows a choice in how to perform the initial replication, which by default happens over the network. That may not be feasible or desirable over a slow network, so there’s also the option to perform the first replication using external media and then import the data at the Replica site.
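Both sides of this configuration can be scripted with the Hyper-V module as well. The sketch below is a minimal example; the storage path, server names and VM name are hypothetical:

# On the Replica server: accept Kerberos-authenticated replication from any primary server
Set-VMReplicationServer -ReplicationEnabled $true -AllowedAuthenticationType Kerberos `
    -ReplicationAllowedFromAnyServer $true -DefaultStorageLocation "D:\Replicas"

# On the primary server: enable replication for a VM and start the initial copy
Enable-VMReplication -VMName "VM01" -ReplicaServerName "replica.contoso.com" `
    -ReplicaServerPort 80 -AuthenticationType Kerberos
Start-VMInitialReplication -VMName "VM01"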

Failover

Hyper-V has a new section in the configuration option for each VM called Failover TCP/IP, where both alternative IPv4 and IPv6 addresses can be provided should the DR site have a completely different IP addressing scheme than the primary site. These addresses are used when a failover action is performed.

There are three kinds of failover:

  • Test Failover
  • Planned Failover
  • Unplanned Failover

A Test Failover allows Hyper-V administrators to perform a failover without disrupting the primary server.

A Planned Failover should involve no data loss; the primary server will temporarily become the Replica server for the VM that has failed over. The VM on the primary server needs to be shut down beforehand, and the primary server must be configured as a Replica server.

An Unplanned Failover is not expected and is initiated from the Replica server at the DR site. A VM on the Replica server will be started from a Replica recovery point. This could result in some data loss.



European SQL 2012 Hosting - Amsterdam :: SQL 2012 Always On Hosting on Cloud

July 17, 2013 10:45 by author Scott

It has traditionally been almost impossible to architect infrastructure-class, highly available (HA) SQL Server solutions using shared storage in the public cloud. Recently Microsoft announced support for some System Center 2012 SP1 applications to work with SQL AlwaysOn, a new way to achieve HA SQL. AlwaysOn uses an availability group concept, much like Exchange 2010 database availability groups (DAGs) to achieve clustered HA services without shared storage.

This is good news for architects looking to move management workloads into the public cloud when appropriate. In theory, a pair of powerful VMs in Azure running SQL 2012 AlwaysOn can approach and exceed the HA SLAs expected of many mission critical applications. Figure 1 shows the new dashboard view of AlwaysOn HA availability groups with some System Center 2012 SP1 databases made highly available.

Figure 1

Why Is HA SQL Important?

Highly available (HA) SQL Server services are the cornerstone of many enterprise database applications. Few enterprise solutions today are deployed on non-HA SQL. Without an HA mechanism for database services, enterprise applications and e-commerce websites cannot offer maximum available uptime. Even perfectly managed servers require periodic restarts for updates and maintenance; and you always need to be prepared for equipment failure such as extended outages of particular servers or disk drives.

The traditional way to offer HA SQL is by creating a SQL Server failover cluster based on shared storage. That is, a storage area network (SAN) presents shelves of disk drives to two or more servers at the same time ("shared storage"). The SAN and shared storage is often the most expensive component in the datacenter. Public cloud solutions abstract you from the storage, and usually don't offer the kind of infrastructure you would need to run a conventional HA SQL failover cluster with shared storage in the cloud.

SQL Server 2012 AlwaysOn: No Need for a SAN

With SQL Server 2012 AlwaysOn, two or more complete copies of each HA database can exist, synchronized by AlwaysOn technology. The independent database copies are presumed to exist locally in direct attached storage (DAS) on each SQL node, or over on the network using economical Windows Server 2012 SMB 3.0 file shares on dedicated Windows Server 2012 file servers.

- You can use Windows Server 2012 Standard for the SQL server nodes-unlike in previous Windows releases, you can enable the failover cluster feature in the Standard edition of Windows Server 2012 as well as Windows Server 2012 Datacenter.

- SQL Server 2012 does require the Enterprise edition of SQL Server 2012 to use the AlwaysOn feature.

Steps to deploy a SQL Server 2012 AlwaysOn Availability Group

Here are the high-level steps to follow to deploy SQL 2012 AlwaysOn in a SQL 2012 failover cluster (without shared storage).

1. Install Windows Server 2012 in two computers or virtual machines (VMs), each with a single network interface card (NIC).

2. Create a two-node failover cluster without shared storage. You will need a cluster name and an IP address for the cluster network name (see the PowerShell sketch at the end of this article).

3. Install SQL Server 2012 Enterprise on both computers as if they were going to be stand-alone SQL servers.

  • When you install, use a domain account for the SQL server services.
  • Open the Windows Firewall on ports TCP 1433 and TCP 5022.

4. Create a temporary "seed" database on the first SQL server using SQL Server 2012 Management Studio. This database will be used to establish the AlwaysOn cluster, and then can be deleted after the first production database is deployed.

5. Make sure the database uses the "Full" recovery model, and perform a SQL backup job.

6. In the Management Studio, create an AlwaysOn Availability Group and an Availability Group Listener. (The Availability Group Listener is essentially the virtual (or clustered) SQL Server instance. There is a one-to-one relationship between availability groups and listeners.)

  • Assign a DNS name and TCP IP address for the AlwaysOn Availability Group and an Availability Group Listener.
  • Assign a shared network folder that is accessible to all SQL servers that will have AlwaysOn database replicas.

7. At the AlwaysOn High Availability node, right-click and select Add A Database To An Availability Group. If your database is of the Full type and has been backed up, the status will be "Meets Requirements". Click Next.

8. Select that you will perform a full synchronization, using the shared network folder you specified in step 6(b). Click Next.

9. Enter security information to access the primary database replica. Click Next, observe the validation and click Next, and then Finish.

10. Observe after a moment that a new database replica has been created on the secondary node in the SQL AlwaysOn availability group, as seen in Figure 2.

Figure 2
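As a supplement to step 2 above, the failover cluster itself can also be created with PowerShell. A minimal sketch, assuming the Failover Clustering feature is available on both nodes; the cluster name, node names and IP address are hypothetical:

# Install the Failover Clustering feature, then create a two-node cluster with no shared storage
Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools
New-Cluster -Name "SQLCLU" -Node "SQL1","SQL2" -StaticAddress "10.0.0.50" -NoStorage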



European SQL 2008 R2 Hosting - Amsterdam :: Generated LINQ to SQL, SQL Server Profiler, and Provide Final T-SQL Generated

July 5, 2013 07:42 by author Scott

If you’re interested in taking a look at the final T-SQL generated by your LINQ to SQL queries then you have a few options available:

  1. Utilise LINQPad, which converts your C# LINQ database queries into T-SQL
  2. Execute an SQL Server Profiler trace against the database your application is querying while it's running
  3. Understand how ADO.NET translates the LINQ to SQL expression tree into T-SQL so you know what it's going to output

So first off I’ll provide a brief overview of SQL Server Profiler and what a “trace” is. Then I’ll show you how to configure and run a trace. Finally I’ll throw a few LINQ to SQL examples at a local SQL database and provide the final T-SQL generated.

SQL Server Profiler & Traces

SQL Server Profiler comes packaged with SQL Server, so if you're running SQL Server Management Studio chances are you'll find a shortcut to this particular tool amongst the SQL shortcuts, or via the Tools menu option in SQL Server Management Studio. Its primary aim is to analyse, and even replay, SQL trace results for the purpose of troubleshooting, diagnosis, benchmarking, monitoring, etc. As a developer you can imagine how helpful this can be when you want to identify bottlenecks, optimise your system during the testing and performance tuning phase, generate performance reports, etc.

Once you delve into examining SQL events you’ll start to notice that SQL generates a LOT of unique events (well over a hundred) and in order to help isolate the ones you’re looking for you have to create what’s called a trace. The trace defines which SQL events to capture as well as providing very handy filtering and output options.

The trace results are also returned in a tabular fashion which you can browse through within the tool or export for use elsewhere.

Creating a Trace

Start up SQL Server Profiler. The first screen you’ll see will hopefully look like this:

Click on “File” in the menu and select “New Trace…” then select the appropriate connection settings.

You’ll then be presented with a fresh “Trace Properties” window from where you’ll be able to configure your trace.

Leave everything as is on the “General” tab with the exception of the “Use the template:” option which you want to change to “Blank”. This essentially clears the events the trace is going to capture so that we can start from scratch and only add those we want. There are a lot of preconfigured templates but we want to avoid those for now.

Now click on the “Events Selection” tab and you’ll see an array of events all grouped accordingly. We want a specific event called “RPC:Completed” which is under the “Stored Procedures” category.

Select the checkbox to the left of "RPC:Completed". This tells our trace that we want to capture all events that are fired when a remote procedure call completes its execution.

One small but useful tip is to organise how the data will be returned so click on the “Organize Columns…” button and reorder the columns so the “StartTime”, “Duration” and “TextData” are at the top. This means we’ll have our trace results returned in chronological order and each will tell us how long they took to execute in milliseconds and what T-SQL was executed in the first few columns. See below:

Click “OK” to save the column order then click “Run” to proceed.

Congratulations you just created a trace!

Reviewing the Trace for LINQ to SQL queries

Now that the trace is up and running we’ll start to see all stored procedure calls that are completed against any database associated with the connection you chose earlier.

If you fire up a Windows C# application that makes LINQ to SQL calls, you'll start to see them appearing in the trace output. What's important to clarify at this point is that even if your LINQ to SQL queries aren't calling a stored procedure, the system stored procedure sp_executesql will be executing the queries, and that's what the trace will be capturing.

See below for an example:

What I’ve selected is a simple LINQ to SQL insert into a table called “ExceptionLog”, and here’s the C# code that generated this call:

var exceptionLog = new ExceptionLog
{
    ExceptionType = exceptionType,
    Message = message,
    AddedDateTime = DateTime.UtcNow
}; 

using (var dataContext = GetDataContext())
{
    dataContext.ExceptionLogs.InsertOnSubmit(exceptionLog);
    dataContext.SubmitChanges();
}

You can use the Pause/Start buttons to halt the trace if need be so you don’t find SQL Server Profiler auto-scrolling to the bottom of the list every time a new procedure is completed.

Here’s another example of a simple LINQ to SQL select:

var query =
    (
        from el in dataContext.ExceptionLogs
        where el.AddedDateTime.Year == 2012
        orderby el.AddedDateTime ascending
        select new ExceptionLog
        {
            ExceptionLogId = el.ExceptionLogId,
            ExceptionType = el.ExceptionType,
            AddedDateTime = el.AddedDateTime
        }
    ).ToList();

And in the trace window we’ll see:

exec sp_executesql N'SELECT [t0].[ExceptionLogId], [t0].[ExceptionType], [t0].[AddedDateTime]
FROM [dbo].[ExceptionLog] AS [t0]
WHERE DATEPART(Year, [t0].[AddedDateTime]) = @p0
ORDER BY [t0].[AddedDateTime]',N'@p0 int',@p0=2012

You can see the exact SQL output. More information is displayed above as well so you can get an idea for what other data is associated with the trace; duration in milliseconds, DatabaseName, RowCounts etc. All very useful.

Here’s another example where I’ve employed the SQL IN operator to retrieve a collection of authorised users. The LINQ to SQL:

var authorisedUserRoleIds = new List<int>
{
    1, // 1 = System Administrator
    2, // 2 = Regional Manager
    3 // 3 = County Manager
}; 

var query =
    (
        from u in dataContext.Users
        join ur in dataContext.UserRoles on u.UserId equals ur.UserId
        where authorisedUserRoleIds.Contains(ur.RoleId)
        orderby u.Surname ascending
        orderby u.FirstName ascending
        select new User
        {
            UserId = u.UserId,
            FirstName = u.FirstName,
            Surname = u.Surname,
            EmailAddress = u.EmailAddress
        }
    ).ToList();

And the resulting SQL:

exec sp_executesql N'SELECT [t0].[UserId], [t0].[FirstName], [t0].[Surname], [t0].[EmailAddress]
FROM [dbo].[User] AS [t0]
INNER JOIN [dbo].[UserRole] AS [t1] ON [t0].[UserId] = [t1].[UserId]
WHERE [t1].[RoleId] IN (@p0, @p1, @p2)
ORDER BY [t0].[FirstName], [t0].[Surname]',N'@p0 int,@p1 int,@p2 int',@p0=1,@p1=2,@p2=3

If you look closely you can see the JOIN, WHERE clause and ORDER BY just as you'd expect.

What Next?

This is really a starting point, and there is a lot more that can be achieved with SQL Server Profiler. Some examples:

If you want to edit your running trace you’ll have to pause it, select the properties option to edit it then get it running again – “File > Properties”.

Another useful event under the SQL group is SP:StmtCompleted. SP:StmtCompleted returns every single statement completed whereas RPC:Completed only returns the last remote procedure executed. So if you called a stored procedure which contained five individual queries from your LINQ to SQL you’d see five SP:StmtCompleted events and one RPC:Completed.

Save the trace you’ve created as a template so that you can reuse it again – “File > Save as Template“. You can even save it into the list of templates that appears in “New Trace” window for easier access or share it amongst your development team.

Apply filters to the trace in the “Events Selection” tab. See the “Column Filters…” button. Useful if you want to identify queries taking longer than 10 milliseconds or just target a given database.

Look into the Database Engine Tuning Advisor tool and how it can analyse trace outputs to help create indexes to boost performance.



MySQL Tutorial :: Backup Data Using mysqldump Utility in MySQL

June 25, 2013 07:38 by author Scott

The mysqldump client is a backup program. It can be used to dump a database or a collection of databases for backup or for transfer to another SQL server (not necessarily a MySQL server). The dump typically contains SQL statements to create the table, populate it, or both. However, mysqldump can also be used to generate files in CSV, other delimited text, or XML format.

To take a dump of a MySQL table, use the command below:
-------------------------------------------------------
~/mysql/bin/mysqldump -u root database_name ORDERS > orders.txt

1. Command to dump only table definitions, not the data (use the -d option). If the path is not set, you need to run the command from the mysql/bin directory:

./mysqldump -d -u root database_name ORDERS CLIENTS COMPANY > ~/tmp/test.database.sql

2. Command to recreate the tables from that file:

mysql -u root database_name < ~/tmp/test.database.sql
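3. If you need a full backup of every database on the server, mysqldump also accepts the --all-databases option (the -p switch prompts for the password):

~/mysql/bin/mysqldump -u root -p --all-databases > ~/tmp/all_databases.sql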

Easy, right?

 



European SQL Hosting - Amsterdam :: SQL Injection? How to Prevent It?

June 19, 2013 08:10 by author Scott

This article talks about what SQL injection is, how it can affect the security of our websites, and what steps should be taken to make an ASP.NET application SQL-injection proof. SQL injection is an attack in which a user of the website supplies SQL code as input, resulting in a SQL statement that the developers didn't intend to write. These SQL statements could result in unauthorized access, revealing secret user information, and sometimes they could even wipe out all the data on the server.

 

Getting to know SQL Injection

Let us take this discussion a little further by looking into the bad coding practices that make an application prone to SQL injection attacks. Let us create a simple table that contains the username and password of each user for authentication.

Now I will create a small page that lets the user enter his login credentials and have them validated against the Users table.

Note: Password should never be stored in plain text. This table contains password in plain text just for the sake of simplicity of this article.

The actual code that I will use to authenticate the user contains dynamic SQL that is created by concatenating strings. This code will return true if the userid and password are found in the database, otherwise false.

public bool IsUserAuthenticated_Bad(string username, string password)
{
    DataTable result = null;
    try
    {
        using (SqlConnection con = new SqlConnection(ConfigurationManager.ConnectionStrings["SampleDbConnectionString1"].ConnectionString))
        {
            using (SqlCommand cmd = con.CreateCommand())
            {
                cmd.CommandType = CommandType.Text;
                cmd.CommandText = "select userID from Users where userID = '" + username + "' and password = '" + password + "'";               

                using (SqlDataAdapter da = new SqlDataAdapter(cmd))
                {
                    result = new DataTable();
                    da.Fill(result);


                    //check if any match is found
                    if (result.Rows.Count == 1)
                    {
                        // return true to indicate that userID and password are matched.
                        return true;
                    }
                }
            }
        }
    }
    catch (Exception ex)
    {
        //Pokemon exception handling
    }

    //user id not found, lets treat him as a guest       
    return false;
}

For all normal users this code will work fine. I can even test it using sampleuser as the userid and samplepwd as the password. For any other data it should say that authentication failed (since this is the only record in the table). The query that will be generated to test this input is:

select userID from Users where userID = 'sampleuser' and password = 'samplepwd'

Now let us try to inject some SQL into this page. Let me give hacker' or 1=1-- as the username and anything as the password (it can even be left empty). The resultant SQL now becomes:

select userID from Users where userID = 'hacker' or 1=1--' and password = ''

Now when we execute this query, the 1=1 clause will always return true (and the password check is commented out). So irrespective of whatever data the user has entered, this SQL will return a row, making this function return true and in turn authenticating the user. What I have done is gain access to the website even though I didn't know valid user credentials.

How we can curb this problem is something we will look into in detail in a moment. But before that, let us also look at one more example of SQL injection, just to get a little more understanding.

In this second example we will assume that a malicious user somehow got hold of the database schema and is trying to manipulate the application to find some confidential information. Let's say we have a page that is supposed to show all the products that are assigned to a user in the organization.

Let us start by looking at the Product table.

Let us now look at the code that is retrieving this data:

public DataTable GetProductsAssigner_Bad(string userID)
{
    DataTable result = null;
    try
    {
        using (SqlConnection con = new SqlConnection(ConfigurationManager.ConnectionStrings["SampleDbConnectionString1"].ConnectionString))
        {
            using (SqlCommand cmd = con.CreateCommand())
            {
                cmd.CommandType = CommandType.Text;
                cmd.CommandText = "select * from Products where AssignedTo = '" + userID + "'";

                using (SqlDataAdapter da = new SqlDataAdapter(cmd))
                {
                    result = new DataTable();
                    da.Fill(result);
                }
            }
        }
    }
    catch (Exception ex)
    {
        //Pokemon exception handling
    }

    //user id not found, lets treat him as a guest       
    return result;
}

Now if I call this function with proper data (as normal users would do) then this will show me the results, i.e. if I call this page for sampleuser the resulting query would be:

select * from Products where AssignedTo = 'sampleuser'

Now let me use this query string with this page: userID=' UNION SELECT 0 AS Expr1, password, userID FROM Users --. Once this data is used with the current code, it will show me all the usernames and passwords from the database. The reason will be quite clear once we look at the resulting query for this input.

select * from Products where AssignedTo = '' UNION SELECT 0 AS Expr1, password, userID FROM Users --

We have now seen how string-concatenated dynamic SQL is prone to SQL injection. There are many other problems that can be created by injecting SQL. Imagine a scenario where the injected SQL drops tables or truncates all the tables. The result in such cases would be catastrophic.

How to Prevent SQL Injection

ASP.NET provides us with good mechanisms for prevention against SQL injection. There are some rules of thumb that should be followed in order to prevent injection attacks on our websites.

  • User input should never be trusted. It should always be validated
  • Dynamic SQL should never be created using string concatenations.
  • Always prefer using Stored Procedures. 
  • If dynamic SQL is needed it should be used with parametrized commands.
  • All sensitive and confidential information should be stored encrypted.
  • The application should never use/access the DB with Administrator privileges. 

User input should never be trusted. It should always be validated

The basic rule of thumb here is that user input should never be trusted. First of all, we should apply filters on all the input fields. If a field is supposed to take numbers, we should never accept letters in it. Secondly, all input should be validated against a regular expression so that no SQL characters or SQL command keywords are passed to the database.

Both this filtering and validation should be done on the client side using JavaScript. That would suffice for the normal user, but malicious users can still bypass client-side validations. So to curb that, all the validations should be done on the server side too.

Dynamic SQL should never be created using string concatenations.

If we have dynamic SQL being created using string concatenations then we are always at risk of getting some SQL that we are not supposed to use with the application. It is advisable to avoid string concatenations altogether.

Always prefer using Stored Procedures.

Stored procedures are the best way of performing DB operations. We can always be sure that no bad SQL is generated if we are using stored procedures. Let us create a stored procedure for the database access required by our login page and see the right way of doing the database operation using a stored procedure.

CREATE PROCEDURE dbo.CheckUser     
      (
      @userID varchar(20),
      @password varchar(16)
      )
AS
      select userID from Users where userID = @userID and password = @password
      RETURN

And now let's write a better version of our code using this stored procedure.

public bool IsUserAuthenticated_Good(string username, string password)
{
    DataTable result = null;
    try
    {
        using (SqlConnection con = new SqlConnection(ConfigurationManager.ConnectionStrings["SampleDbConnectionString1"].ConnectionString))
        {
            using (SqlCommand cmd = con.CreateCommand())
            {
                cmd.CommandType = CommandType.StoredProcedure;
                cmd.CommandText = "CheckUser";
                cmd.Parameters.Add(new SqlParameter("@userID", username));
                cmd.Parameters.Add(new SqlParameter("@password", password));

                using (SqlDataAdapter da = new SqlDataAdapter(cmd))
                {
                    result = new DataTable();
                    da.Fill(result);

                    //check if any match is found
                    if (result.Rows.Count == 1)
                    {
                        // return true to indicate that userID and password are matched.
                        return true;
                    }
                }
            }
        }
    }
    catch (Exception ex)
    {
        //Pokemon exception handling
    }

    //user id not found, lets treat him as a guest       
    return false;
}

If dynamic SQL is needed it should be used with parametrized commands.

If we still find ourselves needing dynamic SQL in code, then parametrized commands are the best way of performing such dynamic SQL. This way we can always be sure that no bad SQL is generated. Let us create a parametrized command for the database access required by our Product page and see the right way of doing the database operation.

public DataTable GetProductsAssigner_Good(string userID)
{
    DataTable result = null;
    try
    {
        using (SqlConnection con = new SqlConnection(ConfigurationManager.ConnectionStrings["SampleDbConnectionString1"].ConnectionString))
        {
            using (SqlCommand cmd = con.CreateCommand())
            {
                cmd.CommandType = CommandType.Text;
                cmd.CommandText = "select * from Products where AssignedTo = @userID";
                cmd.Parameters.Add(new SqlParameter("@userID", userID));

                using (SqlDataAdapter da = new SqlDataAdapter(cmd))
                {
                    result = new DataTable();
                    da.Fill(result);
                }
            }
        }
    }
    catch (Exception ex)
    {
        //Pokemon exception handling
    }

    //user id not found, lets treat him as a guest       
    return result;
}

All sensitive and confidential information should be stored encrypted.

All sensitive information should be stored encrypted in the database. The benefit of this is that even if the user somehow gets hold of the data, he will only be able to see the encrypted values, which are not easy to use for someone who doesn't know the encryption technique used by the application.

The application should never use/access the DB with Administrator privileges.

This will make sure that even if bad SQL is passed to the database by some injection, the database will not allow any catastrophic actions like dropping tables.

Note: Refer to the sample application attached to see working examples of SQL injection and how to curb it using parametrized commands and stored procedures.



About HostForLIFE

HostForLIFE is a European Windows hosting provider that focuses on the Windows platform only. We deliver on-demand hosting solutions including shared hosting, reseller hosting, cloud hosting, dedicated servers, and IT as a Service for companies of all sizes.

We offer the latest Windows 2019 Hosting, ASP.NET 5 Hosting, ASP.NET MVC 6 Hosting and SQL 2019 Hosting.

