European Windows 2019 Hosting BLOG

BLOG about Windows 2019 Hosting and SQL 2019 Hosting - Dedicated to European Windows Hosting Customer

SQL Server 2021 Hosting - HostForLIFE :: SQL Server INFORMATION_SCHEMA Views

clock May 25, 2022 07:42 by author Peter

Have you ever come across a situation where you need to check whether a table exists? Or a scenario where you need to check whether one of a table's columns exists, and then alter the table to add the column you want? If you answered yes to either of these, you've come to the right place.

This post explores the INFORMATION_SCHEMA views: what they are and some of their common usages.

By the way, I'll be using SQL Server 2016, but most of the examples also apply to earlier and later versions of SQL Server.
What is the INFORMATION_SCHEMA?

INFORMATION_SCHEMA is a schema containing a set of system views that expose metadata about the database you are connected to.

It lets you retrieve information about the tables, views, columns, and procedures within a database.

In SSMS, these views appear under Views -> System Views, and they can be queried from any database you choose.

Let's see a screenshot below,


Benefits
As a database developer or administrator, understanding the schema and knowing which tables exist in a specific database gives you a good idea of the underlying structure, which helps when writing SQL queries. It also lets you check that everything on the database is as expected. Moreover, it helps you avoid running queries multiple times just to verify that a schema name or column name is correct.
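For instance, a quick way to verify a column name without re-running your real query is to look it up in INFORMATION_SCHEMA.COLUMNS. A small sketch (the column name here is just an illustration):

```sql
-- Find every table containing a column named CustomerID (illustrative name)
SELECT TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME, DATA_TYPE
FROM INFORMATION_SCHEMA.COLUMNS
WHERE COLUMN_NAME = 'CustomerID';
```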

Common Usages
Before we start, note that we'll be using the Northwind database for all our examples. You can download it from here. Now, here are the steps we're going to take: first, get all the databases on our local SQL Server instance; second, get all the tables; third, search for a column; fourth, search for constraints; and last, query some views.
Showing All Databases

Let's first show all of the databases on the server instance we're connected to.

Note that my result will differ from yours, but you'll see all the databases on your SQL Server instance once you run the query below.
-- Using the sys.databases catalog view
SELECT * FROM sys.databases;

-- Or the sp_databases system stored procedure
EXEC sp_databases;


Output

The queries we used to list the databases aren't directly related to INFORMATION_SCHEMA.

However, they are an excellent starting point as we drill down from databases to tables to columns to keys.
Showing All Tables

Now that we know how to list the databases on a SQL Server instance, in this section we will retrieve all of the tables in the Northwind database.
USE [Northwind];
-- show all tables
SELECT * FROM INFORMATION_SCHEMA.Tables;
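Note that INFORMATION_SCHEMA.TABLES returns views as well as base tables. If you only want the tables themselves, you can filter on the TABLE_TYPE column:

```sql
USE [Northwind];
-- Base tables only (views have TABLE_TYPE = 'VIEW')
SELECT TABLE_SCHEMA, TABLE_NAME
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_TYPE = 'BASE TABLE';
```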


Querying Columns of a Specific Table
In this example, we'll check whether a table exists, and then whether one of its columns exists.

We'll then add a new column to the Categories table, naming it Tags and giving it the data type NVARCHAR(MAX).

Let's try to see a sample query below.
USE [Northwind];

/*
 * Let's try to declare some variables here that we can use later when searching for them.
 */
DECLARE @TABLE_NAME NVARCHAR(50);
SET @TABLE_NAME = 'Categories';

DECLARE @COLUMN_NAME NVARCHAR(50);
SET @COLUMN_NAME = 'Tags';

IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE
                    TABLE_NAME = @TABLE_NAME)
    BEGIN

        PRINT CONCAT('1. The Categories table does exist.',
                    CHAR(13), '1.1 Let''s now create the Tags column.');

        IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.COLUMNS
                    WHERE TABLE_NAME = @TABLE_NAME
                    AND COLUMN_NAME = @COLUMN_NAME)
            BEGIN
                PRINT '2. Dropping the Tags column of the Categories table.';
                ALTER TABLE [Categories]
                DROP COLUMN [Tags];
            END

        BEGIN
            PRINT '3. Creating the Tags column of the Categories table.';

            ALTER TABLE [Categories]
            ADD [Tags] NVARCHAR(MAX);

            DECLARE @ADDED_COLUMNS NVARCHAR(MAX);

            SET @ADDED_COLUMNS = CONCAT('4. Categories table columns: ',
                                        (SELECT STRING_AGG(COLUMN_NAME, ',')
                                        FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = @TABLE_NAME));

            PRINT @ADDED_COLUMNS;
        END
    END
ELSE
    BEGIN
        PRINT 'THE CATEGORIES TABLE DOESN''T EXIST';
    END


Going through the example above: we first check whether the Categories table exists. If it does, we check whether the Tags column exists; if it does, we drop the existing column. After that, we re-create the Tags column. Lastly, we list all of the Categories table's columns as a comma-separated string.

Just a note: the STRING_AGG function is only available in SQL Server 2017 and later, so that part of the example won't run on the SQL Server 2016 instance mentioned earlier.
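If you're on an older version such as SQL Server 2016, a common workaround is the classic STUFF ... FOR XML PATH pattern, which produces the same comma-separated list:

```sql
-- Pre-2017 alternative to STRING_AGG for the Categories columns
SELECT STUFF((
    SELECT ',' + COLUMN_NAME
    FROM INFORMATION_SCHEMA.COLUMNS
    WHERE TABLE_NAME = 'Categories'
    ORDER BY ORDINAL_POSITION
    FOR XML PATH('')
), 1, 1, '') AS Columns;
```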

Output

Find Foreign Keys, Primary Key, and Check Constraints in a Table
Let's see how we can query the primary key, foreign keys, and check constraints of a specific table.

In this case, we're going to use the [Order Details] table and show its constraints: primary key, foreign keys, and check constraints.

Let's see the query below.
USE [Northwind];

SELECT CONSTRAINT_TYPE, CONSTRAINT_NAME
FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS
WHERE CONSTRAINT_TYPE IN ('PRIMARY KEY', 'FOREIGN KEY', 'CHECK')
AND TABLE_NAME = 'Order Details';
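If you also want to see which columns the primary and foreign keys cover, you can join the result to INFORMATION_SCHEMA.KEY_COLUMN_USAGE on the constraint name (check constraints have no key columns, so they won't appear here):

```sql
USE [Northwind];

-- List the key constraints of [Order Details] together with their columns
SELECT tc.CONSTRAINT_TYPE, tc.CONSTRAINT_NAME, kcu.COLUMN_NAME
FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS tc
JOIN INFORMATION_SCHEMA.KEY_COLUMN_USAGE kcu
    ON kcu.CONSTRAINT_NAME = tc.CONSTRAINT_NAME
WHERE tc.CONSTRAINT_TYPE IN ('PRIMARY KEY', 'FOREIGN KEY')
AND tc.TABLE_NAME = 'Order Details';
```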

Querying Views
In this last section, we'll make a simple query that will show the Views inside the Northwind database.

Let's see the query below.
USE [Northwind];

SELECT TABLE_CATALOG AS 'Database Name',
       TABLE_NAME AS 'View Name',
       VIEW_DEFINITION AS 'Query'
FROM INFORMATION_SCHEMA.VIEWS
WHERE TABLE_NAME LIKE 'Products%';

I hope you have enjoyed this article. There are many more ways you can use INFORMATION_SCHEMA.

Moreover, this article has given you a jump start by showing you how to get all the databases, get all the tables, search for a column, search for constraints, and search for views using INFORMATION_SCHEMA.

Once again, I hope you have enjoyed this article/tutorial as I have enjoyed writing it.

Stay tuned for more. Until next time, happy programming!

Please don't forget to bookmark, like, and comment. Cheers! And Thank you!



SQL Server 2021 Hosting - HostForLIFE :: Database Backup Using SQL Queries And Its Type

clock May 18, 2022 08:53 by author Peter

In this article, you will learn how to take a backup of a database using SQL queries, both compressed and uncompressed.

Prerequisites
    Install SQL Server
    Install SQL Server Management Studio (SSMS)

For the above prerequisites, you can follow the article "How to Install SQL Server and SQL Server Management Studio (SSMS)".
Backup of Database in compressed condition

Use the query below to take a backup of the database in compressed condition.
BACKUP DATABASE Database_Name TO
DISK = 'BackupPath\BackupFileName.bak'
WITH COMPRESSION
GO

Backup of Database in uncompressed condition
Use the query below to take a backup of the database in uncompressed condition.
BACKUP DATABASE Database_Name TO
DISK = 'BackupPath\BackupFileName.bak'
WITH NO_COMPRESSION
GO


Restore the database using the backup file with a SQL query
RESTORE DATABASE Database_Name
FROM DISK = 'BackupPath\BackupFileName.bak'
WITH RECOVERY
GO

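Before restoring, you can sanity-check a backup file without actually restoring it. As a sketch (using the same placeholder path as above), RESTORE VERIFYONLY checks that the backup is readable, and RESTORE HEADERONLY shows its metadata:

```sql
-- Verify that the backup file is readable and complete
RESTORE VERIFYONLY
FROM DISK = 'BackupPath\BackupFileName.bak'
GO

-- Inspect backup metadata (database name, backup date, compression, etc.)
RESTORE HEADERONLY
FROM DISK = 'BackupPath\BackupFileName.bak'
GO
```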


Now I will demonstrate the same with the help of an example.
Before we start, we need to know where we define the following items in the SQL query, as shown in figure-1:
    Database name
    Backup folder location
    Backup file name
    Backup file extension

As I have a database named "Company", I am now taking a backup of this database in compressed condition, as shown in figure-2.

Now I can see the backup file was created successfully and saved at the defined location, as shown in the picture below.

Now I am going to delete the database using a DROP query, as shown in the picture below.

Now restore the database using the backup file, as shown in the picture below:

Now I am going to create a backup of the same database "Company" in uncompressed condition as shown in a picture below:


Now follow the same step as shown in figure-4 to delete/drop the database, then follow the steps shown in figure-5 to restore the database, this time using the uncompressed backup file.

 

Difference between Compressed Backup and Uncompressed Backup

S. No. | Compressed Backup | Uncompressed Backup
1 | It is much smaller than an uncompressed backup. | It is larger than a compressed backup.
2 | It takes more time to create the backup file than an uncompressed backup does. | It takes very little time to create the backup file.
3 | Its restore time is much shorter. | It takes more time to restore than a compressed backup does.
4 | It saves about 75% of disk space compared to an uncompressed backup. | Its size is approximately 4 times that of a compressed backup.
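You can confirm these size differences on your own server by querying the backup history that SQL Server keeps in msdb; the backupset table records both the logical size and the on-disk (compressed) size of each backup:

```sql
-- Compare logical vs. on-disk size of recent backups (sizes in MB)
SELECT TOP (10)
       database_name,
       backup_finish_date,
       backup_size / 1048576.0            AS backup_size_mb,
       compressed_backup_size / 1048576.0 AS compressed_size_mb
FROM msdb.dbo.backupset
ORDER BY backup_finish_date DESC;
```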

HostForLIFEASP.NET SQL Server 2021 Hosting

 



SQL Server 2021 Hosting - HostForLIFE :: Import CSV File Into SQL Server Using SQL Server Management Studio

clock May 10, 2022 12:14 by author Peter

In this article, we learn how to import a CSV file into SQL Server using SQL Server Management Studio.
    Log in to your database using SQL Server Management Studio.
    Right-click the database and select Tasks -> Import Data.

Then click the Next button.

From Data Source, select Flat File Source and then use the Browse button to select the CSV file.


Here is my sample CSV file that has some dummy data.

Then select Advanced, and set the column widths as required for your CSV data.


Then click the Next button.

For the Destination, select SQL Server Native Client and enter the server name.
Enter the SQL Server Authentication data and click the Next button.

Then click the Next button.

Then click the Next button.

Then click the Finish button.


The CSV file data is imported successfully into a SQL database table. The table is created with the same name as the CSV file. Select the database and execute a query.




SQL Server 2021 Hosting - HostForLIFE :: Using For XML Clause In SQL Queries

clock May 9, 2022 10:21 by author Peter

XML is an acronym for eXtensible Markup Language. This markup language is similar to HTML. It is designed to store and transport data. Moreover, XML tags are not predefined; XML is designed to be self-descriptive.

For XML Clause

The FOR XML clause is used to convert a SQL result set into XML format. It is a very helpful clause when we need XML data from a SQL result set. The FOR XML clause can be used in top-level queries and in subqueries. The top-level FOR XML clause can be used only in the SELECT statement. In subqueries, FOR XML can be used in the INSERT, UPDATE, and DELETE statements.

There are four modes of the FOR XML clause:

  • Raw 
  • Auto
  • Path
  • Explicit

Elaboration of Modes
 
Raw
Each row is turned into an XML node, called row by default, although you can replace the node name with your own. Each column is converted to an attribute.

select Id, Name, CatId from Product for xml raw

Now let's replace the default row node name with our own custom name.

select Id, Name, CatId from Product for xml raw('product')

It is also possible to set a root element for this XML structure. The default root element name is root, but you can change it.

select Id, Name, CatId from Product for xml raw('product'), root

Moreover, you can also emit elements instead of attributes.


select Id, Name, CatId from Product for xml raw('product'), root, elements
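To make this concrete, with two assumed product rows the raw('product'), root, elements query would produce XML shaped roughly like this (the values are illustrative):

```xml
<root>
  <product>
    <Id>1</Id>
    <Name>Chai</Name>
    <CatId>1</CatId>
  </product>
  <product>
    <Id>2</Id>
    <Name>Chang</Name>
    <CatId>1</CatId>
  </product>
</root>
```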

Auto
Similar to raw mode, but the table name is used as the default node name instead of row. Everything available in raw mode, such as root and elements, can be used here too.


select Id, Name, CatId from Product for xml auto

But in this mode, you can't set a custom node name to replace the default table name.

Here is the use of root and elements in auto mode.


select Id, Name, CatId from Product for xml auto, root('products'), elements

Difference between raw and auto mode
When you retrieve data from multiple tables with a join, you will see the big difference between the two modes.

Raw mode query
select Category.Name as Title, Product.Name from Product
inner join Category on Product.CatId = Category.CatId
for xml raw, root

Auto mode query
select Category.Name as Title, Product.Name from Product
inner join Category on Product.CatId = Category.CatId
for xml auto, root


Path
This mode converts each record into a parent element and every column into a nested element. There are no attributes by default, but you can define custom attributes. In this mode, you can also use row, root, and elements. This is generally the most flexible mode.


select Category.Name as Title, Product.Name from Product
inner join Category on Product.CatId = Category.CatId
for xml path

Here is the use of a custom parent path and root element in path mode.


select Category.Name as Title, Product.Name from Product
inner join Category on Product.CatId = Category.CatId
for xml path('Category'), Root('Products')

You can set attributes in path mode very easily, but keep in mind that to produce an attribute you must prefix the column name alias with the @ sign.

select Category.Name as [@Title], Product.Name from Product
inner join Category on Product.CatId = Category.CatId
for xml path('Category'), Root('Products')

Explicit
This mode is used to generate your own custom XML structure, so choose it when you need full control over the shape of the XML.

In explicit mode, every column alias follows the pattern [<ELEMENT>!<TAG>!<ATTRIBUTE>!<DIRECTIVE>], where:

<ELEMENT>: The name of the element to which values will be assigned.
<TAG>: The tag number, representing the level (depth) in the hierarchy.
<ATTRIBUTE>: The name of the attribute to which a particular column's value will be assigned.
<DIRECTIVE>: Optional; used to provide additional information for XML creation. We will look at one of its options, "ELEMENT", which emits the value as a nested element instead of an attribute.

The first two columns are Tag and Parent; these are meta columns whose values determine the hierarchy. These two columns must be named Tag and Parent; the names are required.


SELECT        1 AS TAG,
              NULL AS PARENT,
              c.Name AS [Category!1!Name],
              NULL AS [Sales!2!SaleID],
              p.Name AS [Sales!2!Product!ELEMENT],
              NULL AS    [Sales!2!Quantity!ELEMENT],
              NULL AS [Sales!2!Date!ELEMENT]
FROM          [Product] p
              INNER JOIN Category c ON p.CatId = c.CatId
              WHERE p.Id in (SELECT ProductId FROM Sales)
              UNION ALL
              SELECT        2 AS TAG,
              1 AS PARENT,
              c.Name AS [Category!1!Name],
              s.SaleId AS [Sales!2!SaleID],
              p.Name AS [Sales!2!Product!ELEMENT],
              s.Quantity AS    [Sales!2!Quantity!ELEMENT],
              s.Date AS [Sales!2!Date!ELEMENT]
FROM          [Product] p
              INNER JOIN Category c ON p.CatId = c.CatId
              INNER JOIN Sales s ON s.ProductId = p.Id
              WHERE p.Id = s.ProductId
              ORDER BY  [Category!1!Name], [Sales!2!Product!ELEMENT], [Sales!2!SaleID]
              FOR XML EXPLICIT

I hope this article has helped you understand the SQL FOR XML clause. I have tried to explain all the terms of this clause very simply. Happy coding, and thanks for reading my article!




SQL Server 2021 Hosting - HostForLIFE :: Another Way To Implement The Incremental Load

clock April 22, 2022 08:06 by author Peter

In this article, I'll discuss one of the functionalities of SQL Server: CDC (Change Data Capture). This feature has been present since the 2008 version; this presentation is based on the 2012 version.

What is the CDC?
The CDC Control task is used to control the life cycle of change data capture (CDC) packages. It handles CDC package synchronization with the initial load package, the management of Log Sequence Number (LSN) ranges that are processed in a run of a CDC package. In addition, the CDC Control task deals with error scenarios and recovery. [Microsoft Documentation]

The objective of CDC is to optimize data integration (the ETL process) by querying only the modifications made to a table instead of processing the entire table, thereby reducing processing times. Among other things, it enables basic auditing and synchronization between two databases.

Change data capture is a concept that is not specific to SQL Server (it's present in other DBMSs such as Postgres and Oracle) and consists of tracking and retrieving changes to the data in a table.

CDC was implemented in SQL Server 2008, but only at the database engine level, using the concept of the Log Sequence Number (LSN), which made implementing CDC in SSIS more complex.

The log sequence number (LSN) value is a three-part, uniquely incrementing value. It is used for maintaining the sequence of the transaction log records in the database. This allows SQL Server to maintain the ACID properties and to perform appropriate recovery actions.

Starting from SQL Server 2012, Microsoft went further in its approach and introduced 3 main components to use CDC directly in SSIS:

A CDC Control Task: manages the life cycle of CDC packages, and in particular all the mechanics of LSNs.

A CDC Source: reads the change information from a change data capture table.

A CDC Splitter: redirects the rows according to whether they need to be inserted, updated, or deleted.

Setup Change Data Capture on the source database
First, we activate CDC on the source database and table. The commands here are the same as under SQL Server 2008.

USE [LearningDatabase]
GO

EXEC sp_changedbowner 'sa'

/* Activate CDC on the database */
EXEC sys.sp_cdc_enable_db

/* Verify that CDC is activated */
SELECT name, is_cdc_enabled
FROM sys.databases
WHERE name = 'LearningDatabase'

/* Enable CDC for the table SRC.Employee */
EXEC sys.sp_cdc_enable_table
@source_schema = N'SRC'
, @source_name = N'Employee'
, @role_name = NULL
, @supports_net_changes = 1
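To confirm that the capture instance was created, you can ask SQL Server to describe it (assuming the SRC.Employee table from the script above):

```sql
-- Describe the capture instance created for SRC.Employee
EXEC sys.sp_cdc_help_change_data_capture
    @source_schema = N'SRC',
    @source_name   = N'Employee';

-- CDC also generates a table-valued function for net changes, named after
-- the capture instance, e.g. cdc.fn_cdc_get_net_changes_SRC_Employee
```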


It is important to note that tables in the source database do not need a column indicating the date and time of the last modification. This means no structural changes are needed in order to enable CDC for extraction.

Make sure that SQL Server Agent is running, as a SQL Server Agent job is used to capture the CDC data.
Change Data Capture Control Flow Task in SSIS

Starting from SSIS 2012, the CDC Control Task was introduced as a new control flow task to implement change data capture when working with CDC-enabled databases and tables.

This new feature controls the life cycle of the change set for both CDC-enabled databases and tables:
    it enables the SSIS package to use the CDC change set,
    it applies the transfer of data as required, and finally
    it marks the change set as processed, or in case of an error it retains the change set for further analysis.

The CDC Control Task holds the CDC state in a package variable (defined when configuring the component), which is used later by the CDC data flow components.

1. Starting the CDC for a table
Let’s start by configuring the CDC Control Task where we need to Mark CDC for Start.
Drag and drop a CDC Control Task into the package.

And follow the configuration like below:

 

  1. Set a connection manager (ADO.NET Connection Manager) to the source database.
  2. Set CDC Control Operation as: Mark CDC Start
  3. Set a variable of type string for CDC State.
  4. Set the connection for the database contains state data.
  5. Set the table for storing the CDC state. You can create a new table here if you don't have one already:

CREATE TABLE [dbo].[cdc_states]
(
    [name] [nvarchar](256) NOT NULL,
    [state] [nvarchar](256) NOT NULL
) ON [PRIMARY]
GO

CREATE UNIQUE NONCLUSTERED INDEX [cdc_states_name]
ON [dbo].[cdc_states] ( [name] ASC )
WITH (PAD_INDEX = OFF) ON [PRIMARY]
GO

Verify or Set the State Name values.

We can now run the package. But what happens when the task runs successfully?
Actually, since we didn't set any actions for the change set, no data is transferred.

The aim of this task is to record the CDC state in the cdc_states table. Note that this task, with the above configuration, needs to be run once and only once.

If we query the cdc_states table, we can see that the state has a timestamp portion showing the date and time at which the state was stored.
This state represents the stored state of the table, so SSIS can recognize the very first state of the change data capture and retrieve the ranges of changes afterwards.

We will disable the task, as we don’t want to run it again.

2. Working with the Range of changes with CDC Control Task

The next step is creating two CDC Control Tasks: one for getting the processing range and the other for marking it as processed. In other words, we retrieve the range of data that has changes and then mark it as processed.

    We create a new CDC Control Task and configure it exactly as we did for the CDC Control Task - Start, changing the CDC control operation to "Get Processing Range".

We create another CDC Control Task, the same as the previous one, and set the CDC control operation to Mark Processed.

We place a Data Flow Task between these two tasks. We will leave the data flow empty for now and fill it in the next stage. The aim of the Data Flow is to read the change set and execute the appropriate action based on the ETL load action (delete, insert, or update).


3. Reading the Changed Set with CDC Source and CDC Splitter


Moving to the data flow task, we need to read the change set using:

    the CDC change set table
    the CDC data flow components (CDC Source, and CDC Splitter)
    the CDC state (stored by CDC Control Tasks)

The CDC Source reads the change set and supplies it as the main output, which is then consumed by the CDC Splitter.

The CDC Splitter splits the change set into three branches: Inserted, Updated, and Deleted.

    Create a CDC Source component in the data flow.
    Set the ADO.NET connection manager to the source database that has CDC enabled.
    Set the CDC-enabled table name.
    Set the CDC processing mode to Net, in order to capture the net changes rather than every intermediate change.
    Set the CDC state variable to the same variable we used in the CDC Control Tasks.

After that, we create a CDC Splitter component after the CDC Source. (It doesn't require any configuration; we just need to connect the CDC Source to it.)

We create a Derived Column transformation for each output and connect the InsertOutput, UpdateOutput, and DeleteOutput of the CDC Splitter to them.

In each one, we create a Status column in the Derived Column transformation and set its value as:

    0 for InsertOutput
    1 for UpdateOutput
    2 for DeleteOutput

 

 

 

Then we use a Union All transformation to combine all three outputs, so we can load them into the staging table using an OLE DB Destination. Note that we may encounter data conversion issues between the source and the destination; in that case we can add a Data Conversion component.





SQL Server 2021 Hosting - HostForLIFE :: Examples Of DATE/DATETIME Conversion

clock April 12, 2022 09:50 by author Peter

I've noticed a bit of confusion when it comes to date conversion in T-SQL: recurring questions on how to strip the TIME part from a DATETIME variable, or how to convert between locales. Here we will see a fast method to split a DATETIME into its DATE and TIME sub-parts, and how to reset the TIME part of a DATETIME.
 
We'll also see a method to quickly retrieve a list of all the possible conversion formats, applied to a certain date.
Let's consider the following script:

    DECLARE @myDateTime DATETIME  
    SET @myDateTime = '2015-05-15T18:30:00.340'  
      
    SELECT @myDateTime   
      
    SELECT CAST(@myDateTime AS DATE)  
    SELECT CAST(@myDateTime AS TIME)  
    SELECT CAST(CAST(@myDateTime AS DATE) AS DATETIME)   


I've created a DATETIME variable, named @myDateTime, and assigned to it the value "2015-05-15T18:30:00.340".

With the first SELECT, we simply print out that value.
 
But look at the three SELECTs that follow. We use the CAST function to convert between data types, asking in the first case to output our DATETIME as a DATE, and in the second case as a TIME.

This has the effect of suppressing the part of the DATETIME we haven't asked for. Casting to DATE produces a value from which the TIME part is stripped, whereas converting to TIME takes away the DATE part of the DATETIME.

 

In the preceding example, we can see the result of those queries. Applying the logic seen a few lines ago, when we need to maintain a DATETIME while resetting (setting to zero) its TIME part, we can use a double cast, as in the fourth SELECT. First, we cast our DATETIME to a DATE (the inner cast of the two); that produces a DATE-only value. Then, with the second cast, we restore the value to its original type. Since the TIME part is now gone, the result is in DATETIME format with a zeroed TIME part.
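Before the DATE type existed (SQL Server 2005 and earlier), the usual way to zero the TIME part was DATEADD/DATEDIFF arithmetic against day 0 (1900-01-01); it still works and gives the same result as the double cast:

```sql
DECLARE @myDateTime DATETIME
SET @myDateTime = '2015-05-15T18:30:00.340'

-- Whole days elapsed since day 0, added back to day 0 = midnight of that day
SELECT DATEADD(DAY, DATEDIFF(DAY, 0, @myDateTime), 0)  -- 2015-05-15 00:00:00.000
```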

Convert a Date in all possible formats
Sometimes we need to format a date for a specific locale without remembering its conversion code. The following script prints all the conversion styles that can be applied to a given date. It loops through the range of possible style numbers (many values in that range are not valid conversion styles and will be skipped thanks to the TRY/CATCH block), showing which of them return a valid conversion.

    DECLARE @myDateTime DATETIME    
    SET @myDateTime = '2015-05-15T18:30:00.340'   
      
    DECLARE @index INT  
    SET @index = 0  
    WHILE @index < 255  
    BEGIN  
      
       BEGIN try  
          DECLARE @cDate VARCHAR(25)  
          SET @cDate = CONVERT(NVARCHAR, @myDateTime, @index)  
          PRINT CAST(@index AS VARCHAR) + '   ' + @cDate  
       END try  
       BEGIN catch   
       END catch  
       SET @index = @index + 1  
    END 

We can assign an arbitrary value to the @myDateTime variable and run the script. We'll then obtain output like the following:

Executing the code prints each CONVERT style together with its representation of our date: a quick reference to spot what we need in a specific context. I hope this helps!




SQL Server 2021 Hosting - HostForLIFE :: Using OPENJSON Function In SQL Server

clock April 5, 2022 08:37 by author Peter

In this article, let's learn how to convert SQL Server data to JSON format. JSON has become a standard way to represent data objects as strings and is commonly used to transfer data from one application to another via APIs.

You can return SQL query results in JSON format in SQL Server by simply adding the FOR JSON clause to the query. FOR JSON is used with the PATH and AUTO modes.
SELECT name, surname
FROM emp
FOR JSON AUTO;


Here is a simple SQL query on Northwind database that returns 10 orders from the Orders table.
SELECT TOP (10) [OrderID]
      ,[OrderDate]
      ,[ShipName]
      ,[ShipAddress]
      ,[ShipCity]
      ,[ShipPostalCode]
      ,[ShipCountry]
  FROM [Northwind].[dbo].[Orders]


The output in SSMS looks like this.

Now, let's add the FOR JSON PATH clause at the end of the SQL query.

SELECT TOP (10) [OrderID]
      ,[OrderDate]
      ,[ShipName]
      ,[ShipAddress]
      ,[ShipCity]
      ,[ShipPostalCode]
      ,[ShipCountry]
  FROM [Northwind].[dbo].[Orders]
  FOR JSON PATH;

The new output looks like this -- it is a JSON array of objects.

[{"OrderID":10248,"OrderDate":"1996-07-04T00:00:00","ShipName":"Vins et alcools Chevalier","ShipAddress":"59 rue de l'Abbaye","ShipCity":"Reims","ShipPostalCode":"51100","ShipCountry":"France"},{"OrderID":10249,"OrderDate":"1996-07-05T00:00:00","ShipName":"Toms Spezialitäten","ShipAddress":"Luisenstr. 48","ShipCity":"Münster","ShipPostalCode":"44087","ShipCountry":"Germany"},{"OrderID":10250,"OrderDate":"1996-07-08T00:00:00","ShipName":"Hanari Carnes","ShipAddress":"Rua do Paço, 67","ShipCity":"Rio de Janeiro","ShipPostalCode":"05454-876","ShipCountry":"Brazil"},{"OrderID":10251,"OrderDate":"1996-07-08T00:00:00","ShipName":"Victuailles en stock","ShipAddress":"2, rue du Commerce","ShipCity":"Lyon","ShipPostalCode":"69004","ShipCountry":"France"},{"OrderID":10252,"OrderDate":"1996-07-09T00:00:00","ShipName":"Suprêmes délices","ShipAddress":"Boulevard Tirou, 255","ShipCity":"Charleroi","ShipPostalCode":"B-6000","ShipCountry":"Belgium"},{"OrderID":10253,"OrderDate":"1996-07-10T00:00:00","ShipName":"Hanari Carnes","ShipAddress":"Rua do Paço, 67","ShipCity":"Rio de Janeiro","ShipPostalCode":"05454-876","ShipCountry":"Brazil"},{"OrderID":10254,"OrderDate":"1996-07-11T00:00:00","ShipName":"Chop-suey Chinese","ShipAddress":"Hauptstr. 31","ShipCity":"Bern","ShipPostalCode":"3012","ShipCountry":"Switzerland"},{"OrderID":10255,"OrderDate":"1996-07-12T00:00:00","ShipName":"Richter Supermarkt","ShipAddress":"Starenweg 5","ShipCity":"Genève","ShipPostalCode":"1204","ShipCountry":"Switzerland"},{"OrderID":10256,"OrderDate":"1996-07-15T00:00:00","ShipName":"Wellington Importadora","ShipAddress":"Rua do Mercado, 12","ShipCity":"Resende","ShipPostalCode":"08737-363","ShipCountry":"Brazil"},{"OrderID":10257,"OrderDate":"1996-07-16T00:00:00","ShipName":"HILARION-Abastos","ShipAddress":"Carrera 22 con Ave. Carlos Soublette #8-35","ShipCity":"San Cristóbal","ShipPostalCode":"5022","ShipCountry":"Venezuela"}]

Now you can consume this return value in your application code to read the JSON objects.

Using the same method, you can convert an entire SQL Server table to JSON by using a SELECT * or SELECT column-names query on the table. The following SQL query converts all rows of a SQL Server table to a JSON string.

SELECT [OrderID]
      ,[OrderDate]
      ,[ShipName]
      ,[ShipAddress]
      ,[ShipCity]
      ,[ShipPostalCode]
      ,[ShipCountry]
  FROM [Northwind].[dbo].[Orders]
  FOR JSON PATH;
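The OPENJSON function from the title works in the opposite direction: it turns a JSON string back into a rowset. A minimal sketch, using a fragment of the order JSON from above:

```sql
DECLARE @json NVARCHAR(MAX) = N'[
  {"OrderID":10248,"ShipCity":"Reims"},
  {"OrderID":10249,"ShipCity":"Münster"}
]';

-- Project JSON properties into typed columns
SELECT OrderID, ShipCity
FROM OPENJSON(@json)
WITH (
    OrderID  INT          '$.OrderID',
    ShipCity NVARCHAR(40) '$.ShipCity'
);
```

Note that OPENJSON requires a database compatibility level of 130 or higher.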


Here is a detailed article on JSON in SQL Server with various options.




SQL Server Hosting - HostForLIFE :: Table As Input Parameters For Stored Procedure

clock April 4, 2022 09:13 by author Peter

This article was initially drafted in 2021; we are completing it now. The content is mainly based on the Microsoft article Table-Valued Parameters, with some explanation and examples to demonstrate the results.

Introduction

Table-valued parameters were introduced to SQL Server in 2008. Table-valued parameters provide an easy way to marshal multiple rows of data from a client application to SQL Server without requiring multiple round trips or special server-side logic for processing the data.

This is the structure of this article,
    Introduction
    A - Passing Multiple Rows in Previous Versions of SQL Server
    B - What Table-Valued Parameters Are
    C - Passing a user-defined table type to a Stored Procedure in SQL Server
    D - Passing a Table-Valued Parameter to a Stored Procedure from C# Code

A - Passing Multiple Rows in Previous Versions of SQL Server
Before table-valued parameters were introduced to SQL Server 2008, the options for passing multiple rows of data to a stored procedure or a parameterized SQL command were limited. A developer could choose from the following options for passing multiple rows to the server:

    Use a series of individual parameters to represent the values in multiple columns and rows of data.
    Bundle multiple data values into delimited strings or XML documents and then pass those text values to a procedure or statement.
    Create a series of individual SQL statements for data modifications that affect multiple rows, such as those created by calling the Update method of a SqlDataAdapter.
    Use the bcp utility program or the SqlBulkCopy object to load many rows of data into a table.

All of the methods above share at least one disadvantage: they require special server-side processing of the data.

B - What Table-Valued Parameters Are

Table-valued parameters let you encapsulate rows of data in a client application and send them to the server in a single parameterized command, without multiple round trips or special server-side logic. The incoming data rows are stored in a table variable that can then be operated on by using Transact-SQL.

Column values in table-valued parameters can be accessed using standard Transact-SQL SELECT statements. Table-valued parameters are strongly typed and their structure is automatically validated. The size of table-valued parameters is limited only by server memory.

There are several limitations to table-valued parameters:

  • You cannot pass table-valued parameters to CLR user-defined functions.
  • Table-valued parameters can only be indexed to support UNIQUE or PRIMARY KEY constraints. SQL Server does not maintain statistics on table-valued parameters.
  • Table-valued parameters are read-only in Transact-SQL code. 
  • You cannot use ALTER TABLE statements to modify the design of table-valued parameters.

C - Passing a user-defined table type to a Stored Procedure in SQL Server
1. Creating Table-Valued Parameter Types

Table-valued parameters are based on strongly typed table structures that are defined by using Transact-SQL CREATE TYPE statements. You have to create a table type and define the structure in SQL Server before you can use table-valued parameters in your client applications. We use the Northwind database.
Use Northwind

CREATE TYPE dbo.CategoryTableType AS TABLE
    ( CategoryID int, CategoryName nvarchar(50) )

In the Microsoft SQL Server Management Studio, we can see the created type:
Database -> Programmability -> Types -> User-Defined Table Types:


2. Creating Stored Procedures in SQL Server using the Table-valued Parameters Type
Table-valued parameters can be used in set-based data modifications that affect multiple rows by executing a single statement.

Update
CREATE PROCEDURE usp_UpdateCategories
    (@tvpEditedCategories dbo.CategoryTableType READONLY)
AS
BEGIN
    SET NOCOUNT ON
    UPDATE dbo.Categories
    SET Categories.CategoryName = ec.CategoryName
    FROM dbo.Categories INNER JOIN @tvpEditedCategories AS ec
    ON dbo.Categories.CategoryID = ec.CategoryID;
END


Note that the READONLY keyword is required when declaring a table-valued parameter.

3. Run the Stored Procedure with Table-Valued Parameters (Transact-SQL)
With the procedure created, we can declare a table variable of the new type, fill it with rows, and pass it to the procedure in a single EXEC call.

Table Categories --- Before:

Run the Stored Procedure with Table-Valued Parameters:

DECLARE @tvpUpdateCategories AS dbo.CategoryTableType
INSERT INTO @tvpUpdateCategories([CategoryID], [CategoryName]) VALUES(8,'SeaFood1')
EXEC  dbo.usp_UpdateCategories @tvpUpdateCategories

Table Categories --- After

D - Passing a Table-Valued Parameter to a Stored Procedure from C# Code
We will skip the detailed implementation here; you can find the full code in the Table-Valued Parameters documentation referenced above.

Note [ref]:
Normally we set the DbType of a SqlParameter for a scalar parameter such as varchar, nvarchar, or int, as in the following code.
SqlParameter sqlParam = new SqlParameter();
sqlParam.ParameterName = "@StudentName";
sqlParam.DbType = DbType.String;
sqlParam.Value = StudentName;


But in the case of a table-valued parameter, we do not provide a DbType. Instead, we set the SqlDbType property to Structured, for example:
SqlParameter Parameter = new SqlParameter();
Parameter.ParameterName = "@PhoneBook";
Parameter.SqlDbType = SqlDbType.Structured;
Parameter.Value = PhoneTable;




SQL Server 2021 Hosting - HostForLIFE :: Remove CONVERTS/CASTS From WHERE/JOIN Clauses

clock March 30, 2022 08:28 by author Peter

Quick Tip
Remove CONVERT/CAST from your WHERE clauses and JOINs when comparing to variables of different data types. Set the variables' data types to match your table definitions before using them as filters. Optimizing this way greatly reduces the CPU time, reads, and I/O generated by your queries and allows your code to take better advantage of indexes.

Example
We are going to create a simple stored procedure called ConvertExample, in two versions. The first version declares two variables as VARCHAR(MAX), and in the WHERE clause it converts the table column [ModifiedDate] and compares the variables to that converted value. Using SET STATISTICS IO, TIME ON, you'll see that this version takes considerably more CPU time than the second version, which converts the variables first and then compares them to the column without a CONVERT in the WHERE clause.
USE [AdventureWorks2017]
GO
CREATE OR ALTER PROCEDURE [dbo].[ConvertExample]
(
       @fromDate AS VARCHAR(MAX),
       @toDate AS VARCHAR(MAX)
)
AS
BEGIN
SELECT [SalesOrderID]
      ,[SalesOrderDetailID]
      ,[CarrierTrackingNumber]
      ,[OrderQty]
      ,[ProductID]
      ,[SpecialOfferID]
      ,[UnitPrice]
      ,[UnitPriceDiscount]
      ,[LineTotal]
      ,[rowguid]
      ,[ModifiedDate]
  FROM [AdventureWorks2017].[Sales].[SalesOrderDetail]
  WHERE CONVERT(varchar(8), [ModifiedDate],112) BETWEEN @fromDate and @toDate
END
GO
SET STATISTICS IO, TIME ON
GO


Now, turn on Actual Execution Plans and execute the procedure.
Execute dbo.[ConvertExample] '20110501','20110531'

Query Plan and Statistics IO, TIME results

 

This code generates a warning on our SELECT and 219 ms of CPU time. Also note the estimated number of rows: 10,918. Now let's rewrite the code, setting the variables to match the data type of the column we filter on, [ModifiedDate], which is a datetime.

--------------------------------------------------------------------------
--REWRITE Convert Variables Instead, and REMOVE CONVERT from WHERE Clause
-----------------------------------------------------------------------------
CREATE OR ALTER PROCEDURE [dbo].[ConvertExample]
(
       @fromDate AS VARCHAR(MAX),
       @toDate AS VARCHAR(MAX)
)
AS
BEGIN
DECLARE @from datetime = CONVERT(datetime, @fromDate)
DECLARE @to datetime = CONVERT(datetime, @toDate)
SELECT [SalesOrderID]
      ,[SalesOrderDetailID]
      ,[CarrierTrackingNumber]
      ,[OrderQty]
      ,[ProductID]
      ,[SpecialOfferID]
      ,[UnitPrice]
      ,[UnitPriceDiscount]
      ,[LineTotal]
      ,[rowguid]
      ,[ModifiedDate]
  FROM [AdventureWorks2017].[Sales].[SalesOrderDetail]
  WHERE [ModifiedDate] BETWEEN @from AND @to
END
-----------------------------------------------------------------------------
--RERUN The Proc
-----------------------------------------------------------------------------
Execute dbo.[ConvertExample] '20110501','20110531'

Query Plan and Statistics IO, TIME results for the second version.

 

Note the large difference in CPU time: it drops from 219 ms to 15 ms. You'll also note that the type conversion warning is gone from our SELECT statement in the plan, and the estimated number of rows is now 356 instead of 10,918. Lastly, we now also get a missing index warning; with the conversion in place, the query optimizer was unable to identify the missing index.

Now for fun let's add an index on ModifiedDate using the code below. Then rerun the old procedure and compare it to the new procedure. You may be surprised by what you see.

USE [AdventureWorks2017]
GO
CREATE NONCLUSTERED INDEX [IDX_SalesOrderDetail_ModifiedDate] ON [Sales].[SalesOrderDetail]
(
       [ModifiedDate] ASC

)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, OPTIMIZE_FOR_SEQUENTIAL_KEY = OFF)
GO

Now recompile the first version of the procedure, execute it, and look at the plan. Then recompile the second version, execute it, and compare. Here is the original run; note that it uses a Clustered Index Scan and bypasses our index.

Now let's look at the new one. You'll see that by removing the CONVERT on every row it compares, the query can now take advantage of the index we created. You'll also note there is now a Key Lookup; take a look at this blog to find out how you can further optimize this query by getting rid of the Key Lookup.

This was just a quick tip and reminder that, whenever possible, you should remove CONVERTs/CASTs from your WHERE clauses and set the variables to the proper data types instead. The same logic applies to JOINs. Do this not only to reduce CPU time, I/O, and reads, but also to take advantage of your indexes.
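The convert-once idea carries over to application code as well: parse user-supplied strings into proper date types once, up front, rather than converting on every comparison. A small Python sketch of the pattern (illustrative only; the function and data are not from the original article):

```python
from datetime import datetime

def filter_by_date(rows, from_str, to_str):
    """Return the rows whose 'modified' date falls in [from_str, to_str].

    The string bounds are converted to datetime ONCE, before the loop,
    mirroring how the rewritten procedure converts its parameters up
    front instead of converting the column value on every row.
    """
    from_date = datetime.strptime(from_str, "%Y%m%d")
    to_date = datetime.strptime(to_str, "%Y%m%d")
    return [r for r in rows if from_date <= r["modified"] <= to_date]

rows = [
    {"id": 1, "modified": datetime(2011, 4, 30)},
    {"id": 2, "modified": datetime(2011, 5, 15)},
    {"id": 3, "modified": datetime(2011, 6, 1)},
]
print(filter_by_date(rows, "20110501", "20110531"))  # only id 2 falls in range
```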


 



SQL Server Hosting - HostForLIFE :: Create SQL Server Database With Pre-Defined Schema

clock March 29, 2022 08:02 by author Peter

In many situations we need to create a database at runtime. A few years ago I worked on a multi-tenant project where we created a new database for each tenant. For this task we followed several steps: create a blank database, execute the schema script (tables, functions, stored procedures, views, triggers, etc.), then execute the master data script. This process has two major issues: it takes time to execute all of this from the application, since the connection must stay alive throughout the steps, and exception handling and rollback are difficult.

Basically, we need to understand how a database gets created. Whenever we create a new database, SQL Server makes a copy of the model database from the system databases. This means that whatever is in the model database gets replicated to the newly created database: the whole schema and the data inside all its tables.

To resolve this problem there is a solution that is very easy and requires little effort. Among the system databases, SQL Server has four default databases; we use the model database for this. You can see it in the image below:

We can add tables, stored procedures, views, functions, and so on to it. I added one table and one stored procedure to the model database, as shown below:


Now, when you create a new database, whether from SSMS or from an application, it will replicate the model database.

Now you can see that the model schema was replicated to the newly created database.


Here you can see that the table with its data and the stored procedure were created. That was from SSMS; now we can check the same from C# code. Please refer below:

And the same database gets created, as shown below:

This way, we can use the model database to generate a pre-defined schema and data.
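Because SQL Server copies model automatically, the runtime step in application code reduces to issuing a single CREATE DATABASE statement. A minimal Python helper that builds (and lightly validates) that statement is sketched below; the function name and validation rule are illustrative, not from the original article, and the actual execution against the server (via a client library on an autocommit connection) is omitted:

```python
def build_create_database_sql(db_name: str) -> str:
    """Return a CREATE DATABASE statement for the given tenant name.

    The new database inherits every table, procedure, and row that was
    placed in the model database, so no schema script is needed here.
    Only simple identifier characters are allowed, as a guard against
    SQL injection through the database name.
    """
    if not db_name or not all(c.isalnum() or c == "_" for c in db_name):
        raise ValueError(f"invalid database name: {db_name!r}")
    # Brackets quote the identifier, following SQL Server conventions.
    return f"CREATE DATABASE [{db_name}]"

print(build_create_database_sql("Tenant_42"))  # → CREATE DATABASE [Tenant_42]
```

CREATE DATABASE cannot run inside a user transaction, which is why an autocommit connection is typically used when executing it from application code.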





