European Windows 2019 Hosting BLOG

BLOG about Windows 2019 Hosting and SQL 2019 Hosting - Dedicated to European Windows Hosting Customer

SQL Server Hosting - HostForLIFE :: SQL vs T-SQL

clock May 5, 2021 06:29 by author Peter

SQL
SQL was initially developed by IBM and was later standardized by ANSI, the American National Standards Institute. 'SQL' stands for Structured Query Language.
SQL is an open standard. SQL is a non-procedural language, meaning SQL statements are executed one at a time.
 
SQL (Structured Query Language) is a standard language for querying and modifying data in a database. SQL allows you to define, select, and modify data. It is used across many different types of databases, and it is well suited to manipulating data when large amounts of information are stored.
DDL and DML commands are used for communicating with the database, for tasks such as storing, retrieving, and manipulating data.
 
T-SQL
T-SQL stands for Transact-SQL (Transact Structured Query Language) and is a Microsoft product. The purpose of T-SQL is to provide a set of tools for developing transactional databases. T-SQL is an extension of the SQL language and is mainly used for building applications.
 
T-SQL is a proprietary standard. T-SQL is a procedural dialect of SQL, meaning code is executed as a logically structured block, in order.
T-SQL adds procedural programming constructs and local variables. In T-SQL, functions and procedures are written as blocks, and the statements inside a block run together rather than as separate round trips to the database.
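A minimal sketch of these procedural features — local variables, branching, and blocks. The table name dbo.Orders here is hypothetical, used only for illustration:

```sql
-- Declare a local variable, fill it from a query, and branch on its value
DECLARE @OrderCount INT;

SELECT @OrderCount = COUNT(1) FROM dbo.Orders;  -- dbo.Orders is a hypothetical table

IF @OrderCount > 100
BEGIN
    PRINT 'High order volume';
END
ELSE
BEGIN
    PRINT 'Normal order volume';
END
```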

HostForLIFEASP.NET SQL Server 2019 Hosting



SQL Server Hosting - HostForLIFE :: How To Reset Identity Column Values In SQL Server?

clock May 3, 2021 07:59 by author Peter

An Identity column in SQL Server can be used to generate identity values for a table. The IDENTITY property on a column is used to create such a column. IDENTITY takes two arguments, seed and increment; each new value is generated from the current seed and increment.
    IDENTITY [ (seed , increment) ]  

Here seed is the value that is used for the very first row loaded into the table and increment is the incremental value that is added to the identity value of the previous row that was loaded.
 
Here is the sample demonstration for resetting identity column values in SQL Server.

Step 1
Create a table.
    CREATE TABLE dbo.Emp  
    (  
    ID INT IDENTITY(1,1),  
    Name VARCHAR(10)  
    )  

Step 2
Insert some sample data.
    INSERT INTO dbo.Emp(name)   
    VALUES ('Peter')  
    INSERT INTO dbo.Emp(Name)  
    VALUES ('Peter Scott') 

When we run the above query, the second Insert statement fails because 'Peter Scott' (11 characters) exceeds the varchar(10) length.

Step 3
Check the identity column value.
    DBCC CHECKIDENT ('Emp')  

Even though the second insert failed, its identity value was still consumed, so if we insert another record, its identity value is 3.
    INSERT INTO dbo.Emp(Name)  
    VALUES ('Scott')  
      
    SELECT * FROM Emp  

    id   name
    1    Peter
    3    Scott

Step 4
 
Reset the identity column value. After the row with ID 3 is deleted and the identity is reseeded to 1, the next inserted row gets identity value 2.
    DELETE FROM EMP WHERE ID=3  
      
    DBCC CHECKIDENT ('Emp', RESEED, 1)  
      
    INSERT INTO dbo.Emp(Name)  
    VALUES ('Scott')  
      
    SELECT * FROM Emp 
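
To check the current identity value at any point without changing it, you can run DBCC CHECKIDENT with the NORESEED option:

```sql
-- Reports the current identity value and the current maximum value
-- in the identity column, without modifying anything
DBCC CHECKIDENT ('Emp', NORESEED)
```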


HostForLIFEASP.NET SQL Server 2019 Hosting


 



SQL Server Hosting - HostForLIFE :: Simple SELECT, INSERT, UPDATE and DELETE Using LINQ to SQL

clock April 27, 2021 08:57 by author Peter

Language-Integrated Query (LINQ) is a Microsoft .NET Framework component that adds native data querying capabilities to .NET languages. In other words, LINQ can query any source of data (collections of objects, database tables, or XML files). We can easily retrieve data from any object that implements the IEnumerable<T> interface and from any provider that implements the IQueryable<T> interface.

Microsoft basically divides LINQ into the following areas:

  • LINQ to Objects : Queries performed against in-memory data
  • LINQ to ADO.NET, which includes:
  • LINQ to SQL (formerly DLinq) : Queries performed against a relational database; only Microsoft SQL Server is supported.
  • LINQ to DataSet : Supports queries using ADO.NET data sets and data tables.
  • LINQ to Entities : Microsoft's ORM solution.
  • LINQ to XML (formerly XLinq) : Queries performed against an XML source.


LINQ to SQL
LINQ to SQL translates our actions to SQL and submits the changes to the database. Here we will perform Select, Insert, Update and Delete operations on a COURSE table.

Step 1: Create a COURSE Table in the database

Step 2: Create a ContextData file using the Object Relational Designer:
Create a new item, select the LINQ to SQL classes (as shown in the following figure) and name it Operation.dbml.


After clicking the Add button the ContextData file is created. Now we should drag all the tables onto the left-hand side of the designer and save (as shown in the following figure). This will create all the mappings and settings for each table and their entities.


For .dbml files the database connection string is defined in the web.config file as:
    <connectionStrings>  
    <add name="DevelopmentConnectionString" connectionString="Data Source=sandeepss-PC;Initial Catalog=Development;User ID=sa;  
    Password=*******" providerName="System.Data.SqlClient" />  
     </connectionStrings>


We can use a connection string from the web.config file or we can pass a connection string as a parameter in the constructor of the DataContext class to create an object of the DataContext class.
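For example, here is a hedged sketch of passing the connection string explicitly to the generated DataContext class; the class name and connection-string name follow this article's earlier examples:

```csharp
// Read the connection string defined in web.config
string connString = System.Configuration.ConfigurationManager
    .ConnectionStrings["DevelopmentConnectionString"].ConnectionString;

// Pass it explicitly to the generated DataContext class
OperationDataContext context = new OperationDataContext(connString);
```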

The SELECT Operation
    private void GetCourses()  
    {  
          //create DataContext object  
          OperationDataContext OdContext = new OperationDataContext();  
          var courseTable = from course in OdContext.GetTable<COURSE>() select course;  
          //grdCourse is gridview id  
          grdCourse.DataSource = courseTable;  
          grdCourse.DataBind();  
    }


The INSERT Operation
    private void AddNewCourse()  
    {  
          //Data mapping object to our database  
          OperationDataContext OdContext = new OperationDataContext();  
          COURSE objCourse = new COURSE();  
          objCourse.course_name = "B.Tech";  
          objCourse.course_desc = "Bachelor Of Technology";  
          objCourse.modified_date = DateTime.Now;  
          //Adds an entity in a pending insert state to this System.Data.Linq.Table<TEntity>and parameter is the entity which to be added  
          OdContext.COURSEs.InsertOnSubmit(objCourse);  
          // executes the appropriate commands to implement the changes to the database  
          OdContext.SubmitChanges();  
    }

The UPDATE Operation
    private void UpdateCourse()  
    {  
          OperationDataContext OdContext = new OperationDataContext();  
          //Get the single course which needs to be updated  
          COURSE objCourse = OdContext.COURSEs.Single(course => course.course_name == "B.Tech");  
          //Field which will be updated  
          objCourse.course_desc = "Bachelor of Technology";  
          // executes the appropriate commands to implement the changes to the database  
          OdContext.SubmitChanges();  
    }

The DELETE Operation
    private void DeleteCourse()  
    {  
          OperationDataContext OdContext = new OperationDataContext();  
          //Get the single course which needs to be deleted  
          COURSE objCourse = OdContext.COURSEs.Single(course => course.course_name == "B.Tech");  
          //Puts an entity from this table into a pending delete state and parameter is the entity which to be deleted.  
          OdContext.COURSEs.DeleteOnSubmit(objCourse);  
          // executes the appropriate commands to implement the changes to the database  
          OdContext.SubmitChanges();  
    }

To perform the select, insert, update, and delete operations we created a table and a data context class, in other words a dbml file. In the designer view of this file we dragged the COURSE table from Server Explorer. The data context class provides the object-to-table mapping: we perform operations on the objects, and the database is updated accordingly when SubmitChanges() is called.

HostForLIFEASP.NET SQL Server 2019 Hosting



SQL Server Hosting - HostForLIFE :: SQL Server Performance Tuning Tips

clock April 19, 2021 07:18 by author Peter

In this article, we will learn about SQL Server performance tuning tips with examples.
 
Database
The Database is the most important and powerful part of any application. If your database is not working properly and taking a long time to compute the result, this means something is going wrong in the database. Here, database tune-up is required, otherwise, the performance of the application will degrade.

I know a lot of articles have already been published on this topic, but in this article I have tried to provide a list of tune-up tips that cover all aspects of the database. Database tuning is a critical and finicky process. It is true that database tuning is usually a database administrator's task, but we should still have a basic level of knowledge, because if we are working on a project with no dedicated DBA, then it is our responsibility to maintain the performance of the database. If database performance degrades, it has a serious effect on the whole system.
 
In this article, I will explain some basic database tuning tips that I learned from my experience and from my friends who work as database administrators. Using these tips, you can maintain or improve the performance of your database system. These tips are written for SQL Server, but most of them apply to other databases too, such as Oracle and MySQL. Please read them carefully, and at the end of the article let me know if you find something wrong or incorrect.
 
Avoid Null value in the fixed-length field
We should avoid NULL values in fixed-length fields, because a NULL inserted into a fixed-length field takes the same amount of space as any other value for that field. If we require NULLs in a field, we should use a variable-length field, which takes less space for NULL. Heavy use of NULLs in a database can also reduce performance, especially in WHERE clauses. For example, prefer varchar or nvarchar over char and nchar for nullable columns.
Never use a Select * statement

When we require all the columns of a table, we usually use a "Select *" statement. This is not a good approach, because when we use "select *", SQL Server expands * into all the column names before executing the query, which takes extra time and effort. So, always list the column names in the query instead of using "select *".
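As an illustrative sketch (dbo.Employee and its columns here are hypothetical):

```sql
-- Avoid: forces SQL Server to expand * into the full column list
SELECT * FROM dbo.Employee;

-- Prefer: name only the columns you actually need
SELECT EmpId, First_Name, Last_Name FROM dbo.Employee;
```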
 
Normalize tables in a database
Normalized, well-managed tables improve the performance of a database. So, always try to normalize at least to third normal form (3NF). Not every table strictly requires 3NF, but a table that reaches 3NF can generally be considered well structured.
 
Keep Clustered Index Small
A clustered index determines how the table's data is physically stored. If the clustered index key is very large, it can reduce performance, because a large clustered key on a table with many rows increases index size significantly. Also, never index frequently changed data, because whenever the table changes, the index must be modified too, which can degrade performance.
 
Use Appropriate Datatype
Selecting an appropriate data type reduces storage space and improves performance; an inappropriate one has the opposite effect. SQL Server contains many data types that can store the same kind of data, but each has its own limitations and advantages, so select the appropriate one according to the requirement.
 
Store image path instead of the image itself
I have found that many developers store the image itself in the database instead of the image path. It may be a genuine application requirement to store images in the database, but in general we should store the image path, because storing images in the database increases the database size and reduces performance.
 
Use Common Table Expressions (CTEs) instead of Temp tables
We should generally prefer a CTE over a temp table: a temp table is physically created in tempdb and persists until it is dropped or the session ends, while a CTE is not materialized at all; it is expanded inline into the query that references it. For short-lived, single-use intermediate results, a CTE is lighter-weight than creating and dropping a temp table.
 
Use Appropriate Naming Convention
The main goal of adopting a naming convention for database objects is to make the type and purpose of every object in the database easily identifiable to users. A good naming convention decreases the time spent searching for an object, and a good name clearly indicates what the object does.
    * tblEmployees // Name of table  
    * vw_ProductDetails // Name of View  
    * PK_Employees // Name of Primary Key  


Use UNION ALL instead of UNION
We should prefer UNION ALL to UNION because UNION removes duplicate rows, which usually requires a sort or hash operation that adds time. Also, UNION cannot be used with the text data type, because text does not support sorting. So, whenever duplicates are acceptable, prefer UNION ALL.
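For example (Customer and Supplier are hypothetical tables):

```sql
-- UNION removes duplicate rows (extra work, typically a sort or hash)
SELECT City FROM dbo.Customer
UNION
SELECT City FROM dbo.Supplier;

-- UNION ALL returns all rows as-is, so it is cheaper when duplicates are acceptable
SELECT City FROM dbo.Customer
UNION ALL
SELECT City FROM dbo.Supplier;
```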
 
Use Small data type for Index
It is very important to use a small data type for an index key, because a bigger data type reduces index performance. For example, nvarchar(10) uses 20 bytes of storage while varchar(10) uses 10 bytes, so an index on the varchar column works better. Another example is datetime versus int: datetime takes 8 bytes and int takes 4 bytes. A smaller data type means less I/O overhead, which increases index performance.
Use Count(1) instead of Count(*) and Count(Column_Name)

There is little difference in the performance of these three expressions, but the last two are not considered good practice. So, always use count(1) to get the number of records in a table.
 
Use Stored Procedure
Instead of using raw ad hoc queries, we should use stored procedures, because stored procedures are fast, easier to maintain, and better for security and large queries.
 
Use Between instead of In
If Between can be used instead of IN, then always prefer Between. For example, suppose you are searching for employees whose id is 101, 102, 103, or 104. You can write the query using the In operator like this:
    Select * From Employee Where EmpId In (101,102,103,104)  

You can also use Between operator for the same query.
    Select * from Employee Where EmpId Between 101 And 104  

Use If Exists to determine the record
 
It has often been seen that developers use "Select Count(*)" to check for the existence of records. For example:
    Declare @Count int;  
    Set @Count=(Select Count(*) From Employee Where EmpName Like '%Pan%')  
    If @Count>0  
    Begin  
        --Statements  
    End  

But this is not the proper way to write such queries, because the query above performs a complete table scan. You can use If Exists for the same check, which can stop at the first match and so increases the performance of your query, as below.
    IF Exists(Select EmpName From Employee Where EmpName Like '%Pan%')  
    Begin  
        --Statements  
    End  


Never use an "sp_" prefix for user-defined Stored Procedures
Many programmers use an "sp_" prefix for user-defined Stored Procedures. I suggest never doing this, because in SQL Server the master database contains system Stored Procedures with the "sp_" prefix. When we create a Stored Procedure with an "sp_" prefix, SQL Server always looks first in the master database, then in the user-defined database, which takes extra time.
 
Practice to use Schema Name
A schema is an organizational structure for a database. We can define a schema as a collection of database objects that are owned by a single principal and form a single namespace. Qualifying an object with its schema name helps SQL Server find that object directly, which speeds up query resolution. For example, write [dbo].TableName instead of just the table name.
 
Avoid Cursors
A cursor is a temporary work area created in system memory when a SQL statement is executed: a set of rows together with a pointer that identifies the current row, used to retrieve data from a result set one row at a time. Cursors are best avoided because this row-by-row fetching takes a long time. In many cases, a set-based query, a temporary table, or a WHILE loop can replace a cursor.
 
SET NOCOUNT ON
When an INSERT, UPDATE, DELETE, or SELECT command is executed, SQL Server returns the number of rows affected by the query. Returning this count adds unnecessary network traffic for most application code, so we can suppress it with SET NOCOUNT ON.
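A minimal sketch inside a stored procedure (the procedure, table, and column names are illustrative):

```sql
CREATE PROCEDURE dbo.usp_GetEmployees
AS
BEGIN
    SET NOCOUNT ON;  -- suppress the "(n rows affected)" message
    SELECT EmpId, EmpName FROM dbo.Employee;
END
```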
 
Use Try–Catch
In T-SQL, a Try-Catch block is very important for exception handling. A best practice and use of a Try-Catch block in SQL can save our data from undesired changes. We can put all T-SQL statements in a TRY BLOCK and the code for exception handling can be put into a CATCH block.
 
Remove Unused Index
Remove all unused indexes, because every index must still be maintained whenever the table is modified, even if the index is never used by a query.
 
Always create an index on the table
An index is a data structure for fast data retrieval. Indexes are special lookup structures that the database engine can use to speed up searches; simply put, an index is a pointer to data in a table, and its main purpose is to increase the speed of data retrieval. So, try to keep at least one index on each table, either a clustered or a non-clustered index.
 
Use Foreign Key with the appropriate action
A foreign key is a column or combination of columns that matches the primary key of a different table. Foreign keys are used to define relationships and enforce integrity between two tables. In addition to protecting the integrity of our data, FK constraints also document the relationships between tables within the database itself. Also, define an action rule for the DELETE and UPDATE commands; you can select any action among NO ACTION, SET NULL, CASCADE, and SET DEFAULT.
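A hedged sketch of a foreign key with delete and update actions; the table and column names here are illustrative:

```sql
-- Hypothetical parent/child tables; deleting a department also removes its employees
CREATE TABLE dbo.Department
(
    DeptId INT PRIMARY KEY
);

CREATE TABLE dbo.DeptEmployee
(
    EmpId  INT PRIMARY KEY,
    DeptId INT,
    CONSTRAINT FK_DeptEmployee_Department
        FOREIGN KEY (DeptId) REFERENCES dbo.Department (DeptId)
        ON DELETE CASCADE     -- child rows follow the parent delete
        ON UPDATE NO ACTION   -- reject updates that would orphan children
);
```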
 
Use Alias Name
Aliasing temporarily renames a table or column within a specific SQL statement. Using an alias, we can give a long name a short one, which saves typing and makes queries easier to read.
 
Use Transaction Management
A transaction is a unit of work performed against the database: a set of T-SQL statements that execute together as a single unit in a specific logical order. If all the statements succeed, the transaction is committed and the data is saved in the database permanently. If any single statement fails, the entire transaction fails and is rolled back.
 
Use Index Name in Query
Although in most cases the query optimizer will pick the appropriate index for a specific table based on statistics, sometimes it is better to specify the index name in your SELECT query.
 
Example
    SELECT  
    e.Emp_IId,  
    e.First_Name,  
    e.Last_Name  
    FROM dbo.EMPLOYEE e  
    WITH (INDEX (Clus_Index))  
    WHERE e.Emp_IId > 5  
Select Limited Data


We should retrieve only the required data and ignore the rest. The less data retrieved, the faster the query will run. Rather than filtering on the client, push as much filtering as possible to the server; this results in less data being sent over the wire, and you will see results much faster.
 
Drop Index before Bulk Insertion of Data

We should drop the index before the insertion of a large amount of data. This makes the insert statement run faster. Once the inserts are completed, you can recreate the index again.
 
Use Unique Constraint and Check Constraint
A Check constraint checks for a specific condition before inserting data into a table. If the data passes all the Check constraints then the data will be inserted into the table otherwise the data for insertion will be discarded. The CHECK constraint ensures that all values in a column satisfy certain conditions.
 
A Unique Constraint ensures that each row must have a unique value in the column. It is like a primary key, but it can accept one NULL value, and a table can have more than one unique constraint. We should use Check and Unique constraints because they maintain integrity in the database.
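A short sketch combining both constraints (the table and column names are hypothetical):

```sql
CREATE TABLE dbo.ProductDemo
(
    ProductId   INT PRIMARY KEY,
    ProductCode VARCHAR(20) UNIQUE,             -- no two products may share a code
    Price       DECIMAL(10,2) CHECK (Price > 0) -- rejects non-positive prices
);
```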
 
Importance of Column Order in index
If we are creating a non-clustered index on more than one column, we should consider the sequence of the columns. The order of a column in the index key plays a vital role in SQL query performance: an index can help a query when the query's criteria match the leftmost columns of the index key. So, place the most selective column leftmost in a non-clustered index.
 
Recompiled Stored Procedures
We all know that stored procedures execute a set of T-SQL statements in less time than the same statements executed individually, because the execution plan for a stored procedure is compiled once and cached for reuse. We also know that recompiling a stored procedure reduces performance, but in some cases recompilation is required, for example:

    Dropping or altering a column, index, or trigger of a table the procedure uses.
    Updating the statistics used by the procedure's execution plan.
    Altering the procedure itself, which causes SQL Server to create a new execution plan.
 
Use Sparse Column
Sparse columns provide better storage for NULL data. If a column contains a large proportion of NULL values, prefer a sparse column over a default SQL Server column; a sparse column takes less space for NULLs than a regular column (one without the SPARSE clause).
 
Example
    Create Table Table_Name  
    (  
    Id int,             -- Default column  
    Group_Id int Sparse -- Sparse column  
    )  


Avoid Loops In Coding
Suppose you want to insert 10 records into a table. Instead of using a loop to insert the data, you can insert all the rows with a single INSERT statement.
    declare @int int;  
    set @int=1;  
    while @int<=10  
    begin  
        Insert Into Tab values(@int,'Value'+CAST(@int AS varchar(2)));  
        set @int=@int+1;  
    end  

The above method is not a good approach for inserting multiple records; instead, you can use a single multi-row insert like the one below.
    Insert Into Tab values(1,'Value1'),(2,'Value2'),(3,'Value3'),(4,'Value4'),(5,'Value5'),(6,'Value6'),(7,'Value7'),(8,'Value8'),(9,'Value9'),(10,'Value10');  

Avoid Correlated Queries
In a correlated subquery, the inner query takes input from the outer (parent) query, so the inner query runs once for each row, which reduces the performance of the database.
    Select Name, City,  
        (Select Company_Name From Company Where CompanyId = cs.CustomerId)  
    From Customer cs  


The better method is to use a join instead of the correlated subquery, as below.
    Select cs.Name, cs.City, co.Company_Name  
    From Customer cs  
    Join Company co On cs.CustomerId = co.CustomerId  


Avoid index and join hints
In some cases, an index or join hint may increase the performance of a query, but once you provide a hint, the server always tries to use it, even when a better execution plan exists, so this approach can also reduce performance. Use a join or index hint only if you are confident there is no better execution plan; when in doubt, leave the server free to choose the plan.
 
Avoid Use of Temp table
Avoid temp tables as much as you can, because a temp table is created in tempdb with a full table structure, and after the task completes we must drop it, which adds load on the database. Prefer a table variable where it fits.
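A sketch of a table variable as a temp-table replacement (the names here are illustrative):

```sql
-- A table variable is scoped to the batch and needs no explicit DROP
DECLARE @Recent TABLE
(
    Id   INT,
    Name VARCHAR(50)
);

INSERT INTO @Recent (Id, Name)
SELECT EmpId, EmpName FROM dbo.Employee WHERE EmpId > 100;

SELECT * FROM @Recent;
```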
 
Use Index for required columns
Indexes should be created on the columns used in WHERE, GROUP BY, ORDER BY, TOP, and DISTINCT clauses.
 
When not to use an index
It is true that an index speeds up retrieval of results, but not always. In some cases, an index does not help a query's performance, and we can avoid creating one:

    When the size of the table is very small.
    When the index is not used by the query optimizer.
    When DML operations (INSERT, UPDATE, DELETE) on the table are frequent.
    When the column contains TEXT or NTEXT data.

Use View for complex queries
If you join two or more tables and the query result is used frequently, it is better to create a View containing the result of the complex query. You can then use this View multiple times without rewriting the complex query each time.
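Using the Customer/Company join from the correlated-query example above, a sketch might look like this (the view name is illustrative):

```sql
CREATE VIEW dbo.vw_CustomerCompany
AS
SELECT cs.Name, cs.City, co.Company_Name
FROM   Customer cs
JOIN   Company  co ON cs.CustomerId = co.CustomerId;
GO

-- Reuse the join result without repeating the query text
SELECT * FROM dbo.vw_CustomerCompany;
```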
 
Make Transaction short
Keep transactions as short as possible, because long transactions hold locks on tables and reduce database concurrency.
 
Use Full-text Index
If your query performs wildcard searches using LIKE ('%term%'), a Full-Text Index can increase performance. Full-text queries can include simple words and phrases or multiple forms of a word or phrase. A full-text query returns any row that contains at least one match (also known as a hit); a match occurs when a target row contains all the terms specified in the full-text query and meets any other search conditions, such as the distance between the matching terms.
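A hedged sketch, assuming a full-text catalog and index have already been created on the EmpName column (setup not shown):

```sql
-- Prefix search via the full-text index, instead of a scan with LIKE '%Pan%'
SELECT EmpId, EmpName
FROM   dbo.Employee
WHERE  CONTAINS(EmpName, '"Pan*"');
```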
 
Thanks for reading the article. As I asked at the start, if you have any doubts, or if I wrote something wrong, write back in the comments section.

HostForLIFEASP.NET SQL Server 2019 Hosting

 



SQL Server Hosting - HostForLIFE :: How To Insert SharePoint List Item To MS SQL Server Using PowerShell?

clock April 15, 2021 08:49 by author Peter

This article demonstrates how to insert SharePoint Online list items into an on-premises Microsoft SQL table using PowerShell. It starts with an introduction to the Get-PnPListItem command, available in the SharePointPnPPowerShellOnline module, for reading list items from a SharePoint list. It then shows how to build the SQL query and use Invoke-Sqlcmd to create a row for each item in the SQL table. Finally, the article discusses updating the MovedToSQL column on each SharePoint list item so that the next run of the script does not insert the same items again.
 
This PowerShell script can be used either as a one-time activity to insert SharePoint list items into a SQL table, or on a scheduler so that it keeps inserting new list items into the Microsoft SQL table.
 
Pre-requisites (Environment Details)
    Windows PowerShell
    SharePointPnpPowerShellOnline Module

Please install the SharePointPnpPowerShellOnline module if it’s not already present using the below command.

    Install-Module SharePointPnPPowerShellOnline  

SharePoint List sample columns with data


Microsoft SQL Server table without data


Variable Explanations in this Article
    $MSSQLServerInstanceName="ContosoServer\InstanceName" - on-premise SQL Server Instance
    $DatabaseName="DBTest" - on-premise SQL Database Name
    $TableName="Product" - SQL Table Name
    $O365ServiceAccount="[email protected]" - Your Service Account Name
    $O365ServiceAccountPwd="abc@2020" - Your Service Account password
    $siteURL="https://abc.sharepoint.com/sites/test" - SharePoint Site URL from where the item will be read
    $List="Product" - List Name


Here you can see we have provided the password in plain text, which you need to do if you want this PowerShell script to run automatically through a Timer Job or Scheduler.
 
For manual execution, please use the Get-Credential command to read the user name and password from user input.
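A minimal sketch for manual runs, reusing the $siteURL variable from this article:

```powershell
# Prompt for the user name and password instead of storing them in plain text
$Credential = Get-Credential

# Connect using the interactively supplied credential
Connect-PnPOnline -Url $siteURL -Credentials $Credential
```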
 
Read SharePoint List Items
The SharePointPnPPowerShellOnline module has made developers' lives easier. To read all the items from a list, use the code snippet below.
 
Please refer to the cmdlet documentation for all the parameters associated with Get-PnPListItem.

    $Items=Get-PnPListItem -List $List  

Here $Items variable will hold all the items from the List.
 
Before reading the list, you should connect to the SharePoint site using the snippet below:

    [SecureString]$SecurePass = ConvertTo-SecureString $O365ServiceAccountPwd -AsPlainText -Force    
    [System.Management.Automation.PSCredential]$PSCredentials = New-Object System.Management.Automation.PSCredential($O365ServiceAccount, $SecurePass)    
    Connect-PnPOnline -Url $siteURL -Credentials $PSCredentials     

Create SQL Query and Invoke-SQLcmd
 
Once the PowerShell script has read all the items of the list into a variable, the next step is to build an INSERT query from each list item's column values and run it with Invoke-Sqlcmd, inserting the items into the SQL table one by one in a foreach loop.
 
If Invoke-Sqlcmd is not found, please install the SQL Server PowerShell module using the command below:

    Install-Module -Name SqlServer  

The snippet below reads items from the list items collection variable $Items and inserts them into the SQL table. (The [dbo] schema in the INSERT statement is assumed; adjust it if your table lives in another schema.)
    #Loop through the items  
    foreach($Item in $Items) {  
        $MovedToSQL = $Item["MovedToSQL"]  
        #Check if the item has already been moved to SQL  
        if ($MovedToSQL -ne "Yes") {  
            $productName = $Item["ProductName"]  
            $productDescription = $Item["ProductDescription"]  
            $productCost = $Item["ProductCost"]  
            $productMake = $Item["ProductMake"]  
            #Insert query for the SQL table  
            $insertquery = "  
            INSERT INTO [$DatabaseName].[dbo].[$TableName]  
                ([ProductName], [ProductDescription], [ProductCost], [ProductMake])  
            VALUES('$productName', '$productDescription', '$productCost', '$productMake')  
            GO "  


Invoke the SQLcmd command to insert the item into the SQL table using the query generated in the above snippet:
    Invoke-SQLcmd -ServerInstance $MSSQLServerInstanceName -query $insertquery -Database $DatabaseName  

Update the MovedToSQL Column of the SharePoint List
 
This step is necessary to avoid duplicating the same data in the SQL table. Once an item is inserted into the SQL table, update the "MovedToSQL" column of the SharePoint list item to "Yes" using the Set-PnPListItem PowerShell command.
 
Below is the code snippet to update the list item
    Set-PnPListItem -List $List -Identity $Item.Id -Values @{ "MovedToSQL"="Yes"}  

Complete Powershell script
    #This script pulls the SharePoint Online list data and moves it into SQL  
    #created by Vinit Kumar  
    #SQL database information - variables - please change  
    $MSSQLServerInstanceName = "ContosoServer\InstanceName"  
    $DatabaseName = "DBTest"  
    $TableName = "Product"  
    #SharePoint variables - please change  
    $O365ServiceAccount = "[email protected]"  
    $O365ServiceAccountPwd = "abc@2020"  
    $siteURL = "https://abc.sharepoint.com/sites/test"  
    $List = "Product"  
    #Connect to SharePoint Online  
    [SecureString]$SecurePass = ConvertTo-SecureString $O365ServiceAccountPwd -AsPlainText -Force  
    [System.Management.Automation.PSCredential]$PSCredentials = New-Object System.Management.Automation.PSCredential($O365ServiceAccount, $SecurePass)  
    Connect-PnPOnline -Url $siteURL -Credentials $PSCredentials  
    #Get SharePoint list items  
    $Items = Get-PnPListItem -List $List  
    #Loop through the items  
    foreach($Item in $Items) {  
        $MovedToSQL = $Item["MovedToSQL"]  
        #Check if the item has already been moved to SQL  
        if ($MovedToSQL -ne "Yes") {  
            $productName = $Item["ProductName"]  
            $productDescription = $Item["ProductDescription"]  
            $productCost = $Item["ProductCost"]  
            $productMake = $Item["ProductMake"]  
            #Insert query for the SQL table  
            $insertquery = "  
            INSERT INTO [$DatabaseName].[dbo].[$TableName]  
                ([ProductName], [ProductDescription], [ProductCost], [ProductMake])  
            VALUES('$productName', '$productDescription', '$productCost', '$productMake')  
            GO "  
            #Invoke SQLcmd to insert the item into the SQL table  
            Invoke-Sqlcmd -ServerInstance $MSSQLServerInstanceName -Query $insertquery -Database $DatabaseName  
            #Update the SharePoint list item once moved: set the column "MovedToSQL" = "Yes"  
            Set-PnPListItem -List $List -Identity $Item.Id -Values @{ "MovedToSQL" = "Yes" }  
        }  
    }  


Result after execution of Script
Once the PowerShell script has executed, the SharePoint list and the SQL table are updated as shown below:

SharePoint List Output

SQL Table Output

HostForLIFEASP.NET SQL Server 2019 Hosting



SQL Server Hosting - HostForLIFEASP.NET :: Quickly Blocking Inbound Connections To SQL Server

clock March 29, 2021 07:32 by author Peter

Sometimes the brilliance of a solution is lost in its simplicity. Full disclaimer: this was Joey D'Antoni's (B|T) idea, but he has given me permission to blog about it. Recently, I was helping a client reconfigure their production SQL Server instances with a new and improved TempDB configuration. We knew that this would require an outage to ensure the changes had correctly taken effect.
 
Of course, we ran into a snag. The current configuration had 30-plus tempdb data files, and any attempt to delete the files was blocked because they were actively being used. Nothing we tried would let us delete the excess files.
 
We even tried to restart the instance into single-user mode; however, every time we did, something else would take the connection before we could get into the instance. We eventually restarted the SQL Server instance into normal operation so that we could investigate why we could not get a connection while in single-user mode.
 
Turns out that, given the production nature of the instance, the client's large farm of application servers was connecting to it faster than we could. This was discovered using sp_who2; however, you could also use the DMV sys.dm_exec_connections to see what is connecting to the instance. So, we needed a way to block incoming connections without doing anything invasive like shutting down the application servers or making a large network change.
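As a sketch, the DMV check might look like the following; the column selection is just illustrative:

```sql
-- Who is connected, from where, and over which protocol?
SELECT c.session_id,
       c.client_net_address,
       c.net_transport,          -- TCP, Shared memory, Named pipe
       s.program_name,
       s.login_name
FROM sys.dm_exec_connections AS c
JOIN sys.dm_exec_sessions AS s
    ON s.session_id = c.session_id
ORDER BY c.client_net_address;
```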
 
This is where the brilliance comes in.
 
Disable TCP/IP as a network protocol to SQL Server. BOOM. Instant firewall.
 
How To
If you open SQL Server Configuration Manager, you will probably see something like this. If you have more features installed or multiple named instances, your list on the right-hand side might look a bit larger, but for the most part it should be similar.

Expand SQL Server Network Configuration in the left-hand menu and select Protocols for the instance you wish to block connections to. In this case, I just have the default instance installed on my laptop.

On the right-hand side, notice that the TCP/IP protocol is currently enabled. You can disable it by double-clicking TCP/IP and then changing the Enabled option to No.

As an alternative method, you can also simply right-click the protocol and choose to enable or disable it.

 

With either method, you will be warned that the changes will not take effect until the service has been stopped and restarted.

 
Go ahead and restart the service. Once back online, the TCP/IP protocol will be disabled and any servers wishing to connect will be effectively denied. Keep in mind that we had local access to the server itself and retained connectivity to it even though SQL Server no longer allowed TCP/IP connections. Utilizing the shared memory protocol (the only one left enabled), we were able to restart into single-user mode, get connected, and successfully remove the extraneous tempdb data files.

Once the tempdb work was completed, we re-enabled the TCP/IP protocol and restarted the instance back into normal operation. Double-checking sp_who2, we verified that all the application servers had resumed connectivity to the SQL Server instance.

Summary

This is an example of a quick and effortless way to block all inbound connections to the SQL Server instance. This method required zero intervention from any engineers to stop application servers or implement firewall changes. A few simple clicks of the mouse and we were back in business.

Sometimes it is the simple things that are astonishing and brilliant.

HostForLIFEASP.NET SQL Server Hosting



SQL Server Hosting - HostForLIFEASP.NET :: Change Data Capture (CDC) In SQL Server

clock March 23, 2021 07:26 by author Peter

Every developer who has worked with SQL Server has sooner or later come across this problem: having to take a copy of rows before performing any DML operation. The table into which they are copied is generally named 'tablename_history' or 'tablename_backup', and the copy is made with an insert query in a stored procedure or trigger, whichever is found appropriate.
 
Recently I stumbled upon a feature of SQL Server called Change Data Capture (CDC for short), which does the above (if enabled) asynchronously by default and is supported in SQL Server 2008 and higher.
 
Enabling Change Data Capture
To implement CDC we first need to enable it on the database; this is done by executing the stored procedure "sys.sp_cdc_enable_db" as given below.
    -- To Enable CDC  
    USE [CDC_TEST]  
    GO  
    EXEC sys.sp_cdc_enable_db  
    GO  


Now, to enable CDC on the table, we need to execute the stored procedure "sys.sp_cdc_enable_table" with its input parameters as given below.
    USE [CDC_TEST]  
    EXEC sys.sp_cdc_enable_table     
      @source_schema = 'dbo', -- Is the name of the schema to which the source table belongs.  
      @source_name = 'Customer', -- Is the name of the source table on which to enable change data capture    
      @role_name     = NULL -- Is the name of the database role used to gate access to change data, we can mention null if we want all the users having access to the database to view the CDC data  


Once the stored procedure executes successfully, several tables under the "cdc" schema are generated in the System Tables folder.


The tables include the following:
    cdc.captured_columns table that contains the list of captured columns
    cdc.change_tables table that contains the list of tables enabled for capture
    cdc.ddl_history table that records the history of all DDL changes since change data capture was enabled
    cdc.index_columns table that contains the indexes associated with the change tables
    cdc.lsn_time_mapping table that is used to map LSN numbers to times
    cdc.dbo_Customer_CT table that contains the actual data before any DML operation is executed, plus some additional metadata such as the operation and the affected-columns count. One such change table is created for each CDC-enabled table; its name varies with the source table, but in general it is "NameOfSchema_TableName_CT", hence "dbo_Customer_CT".

Along with the tables, two SQL Agent jobs are also created:
    cdc.CDC_TEST_capture job is responsible for pushing the DML changes into the change tables
    cdc.CDC_TEST_cleanup job is responsible for cleaning up records from the change tables. This job is created automatically by SQL Server to minimize the number of records in the change tables; if it fails to run, the change tables grow ever larger.
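These jobs can be inspected and tuned with the built-in CDC job procedures. A small sketch is below; the retention value is only an example (retention is expressed in minutes):

```sql
-- View capture/cleanup job settings for the current database
EXEC sys.sp_cdc_help_jobs;

-- Example: keep change rows for 3 days instead of the default
EXEC sys.sp_cdc_change_job
    @job_type = N'cleanup',
    @retention = 4320;  -- minutes (3 days)
```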

Detect Changes
So now that we have implemented CDC on the database and table, let's perform the DML operations given below.
    INSERT INTO [dbo].[Customer]  
               ([CustName]  
               ,[CustMobNo]  
               ,[Address]  
               ,[SubAreaId])  
         VALUES  
               ('test cdc'  
               ,'9876543215'  
               ,'Home Address'  
               ,1)  
      
    UPDATE [dbo].[Customer]  
    SET  
        CustName = 'test cdc 2',  
        CustMobNo = '9876543216',   
        [Address] = 'Address updated',  
        SubAreaId = 2   
    WHERE CustId = 1  
      
    DELETE [dbo].[Customer] WHERE CustId = 1  


The results of the executed DML queries are populated in the [cdc].[dbo_Customer_CT] table, as shown in the image below.


The first five columns are metadata about the changed rows. The column '__$operation' is of particular significance, as it identifies the DML operation:
    __$operation = 1 denotes a deleted row
    __$operation = 2 denotes a newly inserted row
    __$operation = 3 denotes the row before an update
    __$operation = 4 denotes the row after an update

But querying the CDC table directly is not advisable per Microsoft; instead we use the table-valued functions that were created when CDC was enabled on the table. In this case we have a table-valued function called "fn_cdc_get_all_changes_dbo_Customer", which can be used as given below.
    DECLARE @from_lsn binary (10), @to_lsn binary (10)  
    SET @from_lsn = sys.fn_cdc_get_min_lsn('dbo_Customer') -- schema name plus table name  
    SET @to_lsn = sys.fn_cdc_get_max_lsn()  
      
    SELECT *  
    FROM cdc.[fn_cdc_get_all_changes_dbo_Customer](@from_lsn, @to_lsn, 'all')  
    ORDER BY __$seqval  
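To see when each captured change happened, the LSN in each change row can be mapped back to a time via sys.fn_cdc_map_lsn_to_time. A sketch against the same capture instance:

```sql
SELECT sys.fn_cdc_map_lsn_to_time(__$start_lsn) AS change_time,
       __$operation,
       *
FROM cdc.fn_cdc_get_all_changes_dbo_Customer(
         sys.fn_cdc_get_min_lsn('dbo_Customer'),
         sys.fn_cdc_get_max_lsn(),
         'all')
ORDER BY __$seqval;
```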


Disable CDC
Once CDC is enabled we cannot change the primary key of the table or truncate it, and if we add or remove a column the corresponding change table doesn't get updated, so changes to the newly added column won't be detected. In these cases we have to disable CDC, make the appropriate changes, and re-enable CDC on the table. Below is the stored procedure call that can be used to remove CDC from a table.
    EXEC sys.sp_cdc_disable_table     
      @source_schema = 'dbo' ,     
      @source_name = 'Customer',  
      @capture_instance ='all'  

Note

  • The SQL Agent should be up and running all the time
  • cdc_jobs configurations are important to set correctly. Overestimating or underestimating them will have a detrimental impact on your application's performance. Tune them for your workload; a performance test run against your workload can help you find the optimal values
  • Cleanup job is scheduled by default to run at 02:00 AM every day
  • Capture job is scheduled as “Start automatically when SQL Server Agent starts”. Since it runs with the continuous parameter, you should not need to change the “Schedule type”.

HostForLIFEASP.NET SQL Server Hosting



SQL Server Hosting - HostForLIFE :: Handle JSON Data In SQL

clock March 15, 2021 07:25 by author Peter

In today's development world we are exposed to both SQL and No-SQL database operations, and at some point we may need to map JSON data to a SQL table. Here is an article on parsing JSON in SQL. Let's start.
 
Suppose the front end/client sends JSON data in string format. Define a variable to store the JSON string as below.
    DECLARE @json nvarchar(max) ;  
    -- input from client  
    set @json = N'[{ \"mid\": \"/m/01dvt1\", \"description\": \"Joint\", \"score\": 0.975906968,   
    \"topicality\": 0.975906968 }, { \"mid\": \"/m/0dzf4\", \"description\": \"Arm\", \"score\": 0.9426941, \"topicality\": 0.9426941 },   
    { \"mid\": \"/m/01ssh5\", \"description\": \"Shoulder\", \"score\": 0.936277151, \"topicality\": 0.936277151 },    
    { \"mid\": \"/m/035r7c\", \"description\": \"Leg\", \"score\": 0.925112, \"topicality\": 0.925112 },  
    { \"mid\": \"/m/01d40f\", \"description\": \"Dress\", \"score\": 0.920576453, \"topicality\": 0.920576453 },   
    { \"mid\": \"/m/02p0tk3\", \"description\": \"Human body\", \"score\": 0.8836405, \"topicality\": 0.8836405 },  
    { \"mid\": \"/m/062581\", \"description\": \"Sleeve\", \"score\": 0.8722252, \"topicality\": 0.8722252 },   
    { \"mid\": \"/m/019swr\", \"description\": \"Knee\", \"score\": 0.8650081, \"topicality\": 0.8650081 },  
    { \"mid\": \"/m/01j04m\", \"description\": \"Thigh\", \"score\": 0.858148634, \"topicality\": 0.858148634 },    
    { \"mid\": \"/m/01vm1p\", \"description\": \"Elbow\", \"score\": 0.834722638, \"topicality\": 0.834722638 }]';  


This is JSON data in string format (equivalent to JSON.stringify() in JavaScript). Before mapping the data, we first need a valid JSON object from the string input. We can get one by stripping out the "\" and "/" characters. Here is the code.
    set @json = REPLACE(@json,'\','');  
    set @json = REPLACE(@json,'/','');  


SQL Server has a built-in function, OPENJSON, that converts a JSON object into rows and columns. Let's see the output.
    select * from OPENJSON ( @json ) ;   

Output


Here "type" refers to the data type of the JSON value. For more info about OPENJSON, see the documentation on MSDN.
 
Now we have to parse the value column into SQL columns. We can do so using the query below.
    select * FROM    
     OPENJSON ( @json )    
    WITH (  
      mid varchar(10) '$.mid',  
      description varchar(max) '$.description',  
      score nvarchar(20) '$.score',  
      topicality float '$.topicality'  
    )  


Here $.mid, $.description, $.score and $.topicality are JSON property paths. Replace them based on your own JSON property names.
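By default these paths run in lax mode, which returns NULL when a property is missing; prefixing a path with strict makes a missing property raise an error instead. A small sketch with hypothetical data:

```sql
DECLARE @j nvarchar(max) = N'[{"mid":"/m/01dvt1","score":0.97}]';

-- lax (default): the missing "description" property yields NULL
SELECT * FROM OPENJSON(@j)
WITH (
  mid varchar(10) '$.mid',
  description varchar(max) '$.description'  -- NULL here
);

-- strict: the same missing property raises an error instead
-- SELECT * FROM OPENJSON(@j)
-- WITH ( description varchar(max) 'strict $.description' );
```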
 
Output


We can copy these records into a SQL table as below.

    insert into jsondata (mid,description,score,topicality)   
    select mid,description,score,topicality  
    FROM    
     OPENJSON ( @json )    
    WITH (  
      mid varchar(10) '$.mid',  
      description varchar(max) '$.description',  
      score nvarchar(20) '$.score',  
      topicality float '$.topicality'   
    );  
    select * from jsondata;  

Here, I am inserting the JSON data into an existing SQL table.
 
Output

So we can map No-SQL data to a SQL table using SQL Server's predefined function, OPENJSON. I hope this article is helpful for you, and thank you for spending time reading it. I am always open to any input or suggestions. Thank you!

HostForLIFEASP.NET SQL Server 2019 Hosting



SQL Server 2019 Hosting - HostForLIFEASP.NET :: RAND Function In SQL

clock March 12, 2021 11:17 by author Peter

In this blog, we will learn how to use the RAND() function based on our business requirements.
Here we will cover the following:
    Definition
    Random Decimal Number
    Random Integer Range
    Real example
    Summary

Definition
As the name suggests, the RAND function returns a random number: on its own, a decimal between 0 (inclusive) and 1 (exclusive); combined with simple arithmetic, a random decimal or integer in any range.
 
The syntax for the RAND function would be:
    SELECT RAND()  

This function returns a random decimal number between 0 and 1, as in this image.

 
Random Decimal Number
 
We can create a random decimal number between two given numbers; for that, we can use this formula.
    SELECT RAND()*(b-a)+a;  

Here b is the greater number and a the lower number, so this formula returns a decimal number between the two.
 
Random Integer Range
 
We can create a random integer between two given numbers; for that, we can use this formula.
    SELECT FLOOR(RAND()*(b-a+1))+a;  

Here b is the greater number and a the lower number, so this formula returns an integer between the two, inclusive.
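One caveat worth knowing: RAND() is evaluated once per query, so selecting it against a table returns the same value for every row. A common workaround for per-row random integers uses NEWID(); a sketch, where the range 1 to 10 is just an example:

```sql
-- Same value repeated for every row:
SELECT RAND() AS r FROM sys.objects;

-- Different value per row, between 1 and 10:
SELECT ABS(CHECKSUM(NEWID())) % 10 + 1 AS r
FROM sys.objects;
```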
 
Note
The RAND() function can be used in the following SQL Server versions: SQL Server 2017, SQL Server 2016, SQL Server 2014, SQL Server 2012, SQL Server 2008 R2, SQL Server 2008, and SQL Server 2005.
 
Real example
 
Now let's see real examples of this function. I am using multiple examples so that you can differentiate among them.
 
--//---1- Random Decimal-----//

    SELECT RAND()  
    SELECT RAND(7);  
    SELECT RAND(-7);  
    SELECT RAND()*(7-1)+1;  
    SELECT RAND(8)*(7-1)+1;  
    SELECT RAND(-4)*(7-1)+1;  

--//---2- Random Integer-----//


    SELECT FLOOR(RAND()*(8-4+1))+4;  
    SELECT FLOOR(RAND(6)*(8-5+1))+5;  
    SELECT FLOOR(RAND(123456789)*(10-5+1))+5;  


See this image for the results,

HostForLIFEASP.NET SQL Server 2019 Hosting



SQL Server Hosting - HostForLIFEASP.NET :: Deploy SSIS Package To SQL Server

clock March 8, 2021 06:41 by author Peter

Before going further, first make sure you have SQL Server Integration Services installed. Open the SSIS package project in Visual Studio, right-click the project, and hit Deploy to deploy all packages; if you want to deploy individual packages, right-click the package and hit Deploy.


The first window is an introduction window; click the Next button.

We have two deployment targets,

    SSIS in SQL Server
    SSIS in Azure Data Factory

As in this article we are going to deploy to SQL Server, we select SSIS in SQL Server and click Next.


Select the destination: enter the SQL Server name, authentication type, username and password, and click Connect. Once connected, browse to the project folder path if available; if not, create a directory in SSISDB, create a new project, and hit Next.

You can review all given changes and hit Deploy.

You can check the deployment result in the last window. If all results are Passed, click Close.


The screenshot above shows that all results passed and the project deployed successfully.


Go to SQL Server, expand Integration Services Catalogs, and go to SSISDB; there you can see the created folder, the project, and the deployed packages.

HostForLIFEASP.NET SQL Server Hosting



About HostForLIFE

HostForLIFE is European Windows Hosting Provider which focuses on Windows Platform only. We deliver on-demand hosting solutions including Shared hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes.

We have offered the latest Windows 2019 Hosting, ASP.NET 5 Hosting, ASP.NET MVC 6 Hosting and SQL 2019 Hosting.

