European Windows 2019 Hosting BLOG

BLOG about Windows 2019 Hosting and SQL 2019 Hosting - Dedicated to European Windows Hosting Customers

SQL Server 2016 Hosting - HostForLIFE.eu :: SQL UNIQUE Constraint

March 18, 2020 12:55 by author Peter

The UNIQUE constraint ensures that all values in a column are different. Both the UNIQUE and PRIMARY KEY constraints provide a guarantee of uniqueness for a column or set of columns. A PRIMARY KEY constraint automatically has a UNIQUE constraint on it. However, you can have many UNIQUE constraints per table, but only one PRIMARY KEY constraint per table.

 
We can create a UNIQUE constraint in SQL Server 2019 (15.x) by using SQL Server Management Studio or Transact-SQL to ensure no duplicate values are entered in specific columns that do not participate in a primary key. Creating a UNIQUE constraint automatically creates a corresponding unique index.
 
SQL Server Unique constraints allow you to ensure that the data stored in a column, or a group of columns, is unique among the rows in a table. 
 
Syntax
    CREATE TABLE EmployeeName (
        EmpID int NOT NULL UNIQUE,
        EmpName varchar(255) NOT NULL
    );


The above query creates a table named "EmployeeName" with a column EmpID that is both NOT NULL and UNIQUE (i.e. it cannot hold empty or duplicate values), and a column EmpName that is NOT NULL.
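To illustrate what the constraint enforces, here is a small sketch (the inserted values are made up for the example); the second INSERT is rejected because it repeats an existing EmpID:

    INSERT INTO EmployeeName (EmpID, EmpName) VALUES (1, 'Peter');  -- succeeds
    INSERT INTO EmployeeName (EmpID, EmpName) VALUES (1, 'Scott');  -- fails with a "Violation of UNIQUE KEY constraint" error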
 
Using a UNIQUE constraint in a CREATE TABLE statement
 
Syntax
    CREATE TABLE Employee (
        EmpID int NOT NULL UNIQUE,
        EmpName varchar(255) NOT NULL,
        EmpFirstName varchar(255),
        EmpLastname varchar(255),
        EmpAge int
    );


The above query creates a table named "Employee" whose first column, EmpID, is NOT NULL and UNIQUE; the other columns are EmpName, EmpFirstName, EmpLastname, and EmpAge.
 
Using a UNIQUE constraint in an ALTER TABLE statement
 
Syntax
    ALTER TABLE Employee      
    ADD UNIQUE (EmpID);    


The above query adds a UNIQUE constraint to the existing EmpID column of the Employee table.
 
"Unique" is used to signify a Unique constraint, and also to define a unique name a Unique constraint,on multiple columns.
 
Syntax
    ALTER TABLE Employee      
    ADD CONSTRAINT UC_Employee UNIQUE (EmpID,EmpLastName);    

The above query adds a UNIQUE constraint named UC_Employee on the EmpID and EmpLastName columns of the Employee table.
 
Dropping a UNIQUE constraint
 
Use the following example to drop a Unique constraint:
 
Syntax 
    ALTER TABLE Employee      
    DROP CONSTRAINT UC_Employee;    

The above query removes the UC_Employee UNIQUE constraint from the Employee table.
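If you do not remember the constraint name, one way to look it up is to query the system catalog; here is a small sketch (assuming the Employee table from the examples above):

    SELECT name
    FROM sys.key_constraints
    WHERE type = 'UQ'
      AND parent_object_id = OBJECT_ID('dbo.Employee');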
 
Creating a unique constraint using SQL Server Management Studio
 
To create a unique constraint statement:
    In Object Explorer, right-click the table to which you want to add a unique constraint, and click Design.
    On the Table Designer menu, click Indexes/Keys.
    In the Indexes/Keys dialog box, click Add.
    In the grid under General, click Type and choose Unique Key from the drop-down list box to the right of the property.
    On the File menu, click Save table name.

Creating a unique constraint using Transact-SQL
 
To create a unique constraint,

    In Object Explorer, connect to an instance of Database Engine.
    On the Standard bar, click New Query.
    Copy and paste the following example into the query window and click Execute. This example creates the table SampleDetails and creates a unique constraint on the column TransactionID.

Syntax
    USE sample ;       
    GO       
    CREATE TABLE SampleDetails       
     (       
       TransactionID int NOT NULL,        
       CONSTRAINT AK_TransactionID UNIQUE(TransactionID)        
    );        
    GO      

The above query creates a table SampleDetails in the sample database, with a TransactionID column and the AK_TransactionID constraint, which makes TransactionID unique.
 
To create a unique constraint on an existing table
    In Object Explorer, connect to an instance of Database Engine.
    On the Standard bar, click New Query.
    Copy and paste the following example into the query window and click Execute. The example creates a unique constraint on the columns PasswordHash and PasswordSalt in the table Person.Password. 

Syntax
    USE sample         
    GO       
    ALTER TABLE Person.Password        
    ADD CONSTRAINT AK_Password UNIQUE (PasswordHash, PasswordSalt);        
    GO  
     

The above query alters the table Person.Password in the sample database, adding the AK_Password constraint, which makes the combination of PasswordHash and PasswordSalt unique.
 
To create a unique constraint in a new table
    In Object Explorer, connect to an instance of Database Engine.
    On the Standard bar, click New Query.
    Copy and paste the following example into the query window and click Execute. The example creates a table and defines a unique constraint on the column TransactionID.

Syntax 
    USE sample;       
    GO       
    CREATE TABLE Production.TransactionHistoryArchive2       
    (       
       TransactionID int NOT NULL,       
       CONSTRAINT AK_TransactionID UNIQUE(TransactionID)       
    );       
    GO   
   

The above query uses the sample database and creates a table named Production.TransactionHistoryArchive2 with a TransactionID column and the constraint AK_TransactionID, which makes TransactionID unique.


 



SQL Server Hosting - HostForLIFE.eu :: INTERSECTION in SQL Server

February 26, 2020 11:06 by author Peter

Sometimes we have a scenario where we want to get the common set of rows from 2 different result sets. For example, we have 2 queries and both return employee records. If we want to find which employees are present in both result sets, we can use INTERSECT to get the result. Below is a graphical representation of how INTERSECT works.

In the graphical representation, the two circles on the left have the letters B and C in common; that common area is nothing but the intersection.

Now let's see how it works in the database.

So we are going to create 2 tables, EmpTable and ManagerTable, and then we will insert records into them.
CREATE TABLE EmpTable(EmpName   VARCHAR(50),City    VARCHAR(50),Title   VARCHAR(50)) 
CREATE TABLE ManagerTable(EmpName   VARCHAR(50),City    VARCHAR(50),Title   VARCHAR(50)) 
 
INSERT INTO EmpTable 
SELECT EmpName='John',City='Stamford',Title='Operator' 
 
INSERT INTO EmpTable 
SELECT EmpName='Luis',City='Danbury',Title='Electrical Engineer' 
 
INSERT INTO EmpTable 
SELECT EmpName='Smith',City='Wilton',Title='Driver' 
 
 
INSERT INTO ManagerTable 
SELECT EmpName='Mike',City='Wilton',Title='Driver' 
 
INSERT INTO ManagerTable 
SELECT EmpName='Smith',City='Wilton',Title='Driver' 
 
INSERT INTO ManagerTable 
SELECT EmpName='Jonathan',City='Armonk',Title='Accountant' 
 
INSERT INTO ManagerTable 
SELECT EmpName='Warner',City='Stamford',Title='Customer Service' 
 
INSERT INTO ManagerTable 
SELECT EmpName='Luis',City='Danbury',Title='Electrical Engineer' 


Now run the queries below to find the common employees in both tables.
SELECT * FROM EmpTable 
 
SELECT * FROM ManagerTable 
 
-- INTERSECTION 
SELECT * FROM EmpTable 
INTERSECT 
SELECT * FROM ManagerTable
 

Here is the output.

In both the "EmpTable" and the "ManagerTable" tables, Luis and Smith are present with identical rows, so combining the two queries with INTERSECT returns just those two rows.

One thing to remember here is that both tables/result sets should have the same number of columns and compatible data types for those columns; otherwise you may get a "Conversion failed when converting...." error if a data type does not match.
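For comparison, roughly the same result can be obtained with EXISTS; the sketch below is only equivalent when the compared columns contain no NULLs, because INTERSECT treats two NULLs as equal while the equality predicates in EXISTS do not:

    SELECT DISTINCT e.EmpName, e.City, e.Title
    FROM EmpTable e
    WHERE EXISTS (SELECT 1 FROM ManagerTable m
                  WHERE m.EmpName = e.EmpName
                    AND m.City = e.City
                    AND m.Title = e.Title);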

 



SQL Server 2016 Hosting - HostForLIFE.eu :: Collation Error In SQL Server

February 12, 2020 11:46 by author Peter

ERROR - "Cannot resolve the collation conflict between SQL_Latin1_General_CP1_CI_AS and Latin1_General_CI_AS_KS_WS within the up to operation."

Don’t panic if you get this error while joining your tables. There's an easy way to solve it. It happens because of different collation settings on the 2 columns we are joining.

The first step is to figure out which two collations have caused the conflict.
Let us assume that the collation has not been set at the column level but at the database level. Then we have to execute two straightforward statements, as below.
Statements

  • Select DATABASEPROPERTYEX('DB1', N'Collation')
  • Select DATABASEPROPERTYEX('DB2', N'Collation')

One more thing to make a note of here: if the database belongs to SharePoint, you will typically see the following collation.

Latin1_General_CI_AS_KS_WS
 
If you are on any other database and use the default settings, you may get SQL_Latin1_General_CP1_CI_AS.

Now, we have to do something similar to CAST, using the COLLATE clause.

Refer to the example below.
    select * from Demo1.dbo.Employee emp 
    join Demo2.dbo.Details dt 
    on (emp.email = dt.email COLLATE SQL_Latin1_General_CP1_CI_AS) 
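
If the collation has instead been set at the column level, a quick way to check it is to query sys.columns; a small sketch (the database and table names are the illustrative ones used above):

    USE Demo1;
    GO
    SELECT name, collation_name
    FROM sys.columns
    WHERE object_id = OBJECT_ID('dbo.Employee')
      AND collation_name IS NOT NULL;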




SQL Server Hosting - HostForLIFE.eu :: Rethinking Sorting In SQL Server Stored Procedures

February 5, 2020 11:03 by author Peter

We know that sorting can be one of the most expensive operations in an execution plan, as shown below. However, we continue to use ORDER BY repeatedly. Yes, I 100% agree that there are times when a result set needs to be sorted, and that for good reason this should be done in the procedure, but my concern is having multiple sorts, erroneous sorts, and sorts that can be done elsewhere. These are the ones that waste resources and can stifle performance.

Many of us writing procedures tend to write in code blocks. We write the SELECT, JOINs, FROMs and WHEREs, then immediately follow it up with an ORDER BY as a way to check result sets before moving on to the next block of code. I admit I do this almost every time. But what most developers do not do is remove the ORDER BYs that are not actually required. This is very costly and can lead to suboptimal performance, not only for your procedure but also for TEMPDB, as this is where all sorting takes place.
Do you sort in your procedures that are used by data consumers like reports, ETL or an application? If you do, I ask: why are you sorting in the procedure and not in the consumer? Many report end users will re-sort the data in Excel, or the report itself offers parameters for custom sorts, or the data doesn’t need a sort at all. Why waste resources on the SQL Server side just for it to be nullified? By removing unneeded sorts or performing the sort in the application tier you can get big performance gains. I would rather have a report, ETL process or application take the performance hit than a procedure.
 
Let’s look at one of the procedures that are available in AdventureWorks2016CTP3, called uspGetOrderTrackingBySalesOrderID. We will run it using the example execution in the code, then remove the ORDER BY, recompile and rerun. We will be able to clearly see the difference.
    USE [AdventureWorks2016CTP3] 
     
    GO 
     
    /****** Object:  StoredProcedure [dbo].[uspGetOrderTrackingBySalesOrderID]    Script Date: 1/28/2020 11:31:16 AM ******/ 
     
    SET ANSI_NULLS ON 
     
    GO 
    SET QUOTED_IDENTIFIER ON 
    GO 
     
    ALTER PROCEDURE [dbo].[uspGetOrderTrackingBySalesOrderID] 
     
       @SalesOrderID [int] NULL 
    AS 
    BEGIN 
    /* Example: 
     
          exec dbo.uspGetOrderTrackingBySalesOrderID 53498 
    */ 
       SET NOCOUNT ON; 
       SET STATISTICS IO, TIME ON 
     
       SELECT 
          ot.SalesOrderID, 
          ot.CarrierTrackingNumber, 
          ot.OrderTrackingID, 
          ot.TrackingEventID, 
          te.EventName, 
          ot.EventDetails, 
          ot.EventDateTime 
       FROM 
          Sales.OrderTracking ot, 
          Sales.TrackingEvent te 
       WHERE 
          ot.SalesOrderID = @SalesOrderID AND 
          ot.TrackingEventID = te.TrackingEventID 
       --ORDER BY 
       --   ot.SalesOrderID, 
       --   ot.TrackingEventID; 
    END; 
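
A rough way to compare the two versions, based on the example execution included in the procedure's comment, is simply to time the call before and after removing the ORDER BY:

    SET STATISTICS TIME ON;
    EXEC dbo.uspGetOrderTrackingBySalesOrderID 53498;
    SET STATISTICS TIME OFF;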

Plan with ORDER BY

Plan without ORDER BY, and Query Store graph showing the difference in duration between the two. You can clearly see the performance improvement, and this was just one sort in a very simple procedure. Take a moment and consider the sorting that happens in your code. I’d ask that when writing stored procedures, doing code reviews or performance tuning, you take a second to ask why the sorts are being done in the data tier and whether they can be performed elsewhere. You can get some performance gains not only in your code but in TEMPDB as well when sorting is reined in.



SQL Server Hosting - HostForLIFE.eu :: SQL Server In Memory Table Indexes

January 29, 2020 10:49 by author Peter
Now that I have written about In-Memory Tables and Migrating to In-Memory tables, let’s look at indexes, how they are created, and how they work within those tables. As you can imagine, indexes on these tables, called memory-optimized indexes, are different from those on regular tables, so let’s see just how different they are. Before we dive into this subject, it is VERY important to note the biggest differences.

First, ALL memory-optimized indexes MUST be created when the table is created or migrated; you cannot add indexes to an existing table without dropping and recreating it. Secondly, currently you can only have 8 indexes per table, including the one backing your primary key. Remember that every durable (SCHEMA_AND_DATA) memory-optimized table must have a primary key. This means you can really only add 7 additional indexes, so be sure to understand your workloads and plan indexing accordingly.

Third, memory-optimized indexes exist only in memory; they are not persisted to disk and are not logged in the transaction log. This also means they are recreated upon database startup, which incurs a performance hit while they are rebuilt.

Next, there is no such thing as a key lookup against an In-Memory table, as all memory-optimized indexes are by nature covering indexes. The index uses a pointer to the actual rows to get the needed fields instead of using a primary key lookup like disk-based tables do. Therefore, these are much more efficient at returning the proper data.

Lastly, there is also no such thing as fragmentation for these indexes, since they are not read from disk. Unlike on-disk indexes, they do not have a fixed page length. On-disk indexes use physical page structures within the B-tree, and deciding how much of each page should be filled is what the fill factor does. Since this is not a requirement here, fragmentation does not exist.

Ok now that we made it through all of that, let’s look at the types of indexes you can create and gain an understanding of what they are and how they are created.

Nonclustered HASH Index
This index accesses the In-Memory version of the table through a hash structure. Hash indexes are great for predicates that are singleton lookups, not ranges of values, and are optimized for seeks on equality values, for example WHERE Name = ‘Joe’. Something to keep in mind when determining what to include in your indexes is this: if your query has two or more fields in its predicate and your index consists of only one of those fields, you will get a scan. It will not seek on that one field that was included.

Understanding your workloads and indexing on the appropriate fields (or combinations thereof) is important since you are limited to only 7 additional indexes. Given that In-Memory OLTP is mainly focused on heavy insert/update workloads, and less so on reading, this should be less of a concern.

These types of indexes are highly optimized but do not work very well if there are a lot of duplicate values in the index; the more unique your values, the better the performance gains you will get. It is always important to know your data. When it comes to these indexes, knowing your memory consumption also plays a part. The hash index type is a fixed length and consumes a fixed amount of memory determined upon creation. The amount of memory is determined by the BUCKET_COUNT value. It is extremely important to make sure this value is as accurate as possible. Right-sizing this number can make or break your performance; too low a number, according to Microsoft, “can significantly impact workload performance and recovery time of a database.”

Look for my upcoming blog on determining bucket counts for more information. Meanwhile, you can learn more about hash indexes at docs.microsoft.com.

Using T-SQL (both methods give the same result)
Example One (Note the index comes after the table fields)

CREATE TABLE [Sales]   
([ProductKey] INT NOT NULL,  
[OrderDateKey] [int] NOT NULL,  
INDEX IDX_ProductKey HASH ([ProductKey]) WITH (BUCKET_COUNT = 100))  
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY)  

Example Two (Note the index comes after the field)
CREATE TABLE [Sales]  
([ProductKey] INT NOT NULL INDEX IDX_ProductKey HASH WITH (BUCKET_COUNT = 100),  
[OrderDateKey] [int] NOT NULL)  
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY)  
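
Once a table such as [Sales] above exists, one way to judge whether the BUCKET_COUNT was sized sensibly is to query the hash index statistics DMV; this is only a sketch, but a large share of empty buckets or very long chains suggests the bucket count needs adjusting:

SELECT OBJECT_NAME(hs.object_id) AS table_name,
       i.name AS index_name,
       hs.total_bucket_count,
       hs.empty_bucket_count,
       hs.avg_chain_length,
       hs.max_chain_length
FROM sys.dm_db_xtp_hash_index_stats AS hs
JOIN sys.indexes AS i
  ON i.object_id = hs.object_id AND i.index_id = hs.index_id;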

Nonclustered Index
These are also used to access the In-Memory version of the table; however, they are optimized for ranges of values such as less than or equal to, inequality predicates, and sort orders. Examples are WHERE DATE BETWEEN ‘20190101’ AND ‘20191231’ and WHERE DATE <> ‘20191231’. These indexes do not require a bucket count or a fixed memory amount. The memory consumed by these indexes is determined by the actual row count and the size of the indexed key columns, which makes them simpler to create.

Moreover, in contrast to hash indexes, which need all the fields of your predicate to be part of the index to get a seek, these do not. If your predicate has more than one field and your index has one of those fields as its leading key column, you can still attain a seek.

Using T-SQL (both methods give the same result)
Example One (Note the index comes after the table fields)

CREATE TABLE [Sales]   
([ProductKey] INT NOT NULL,  
[OrderDateKey] [int] NOT NULL,  
INDEX IDX_ProductKey ([ProductKey]))  
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY)

Example Two (Note the index comes after the field)

CREATE TABLE [Sales]  
([ProductKey] INT NOT NULL INDEX IDX_ProductKey,  
[OrderDateKey] [int] NOT NULL)  
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY) 

Determining which index type to use can be tricky, but Microsoft has given us a great guide in the chart below. As you can see, there are some key differences in how In-Memory table indexes (memory-optimized indexes) work compared to the normal disk-based indexes we are used to. As with any other table design, it is important to consider your index needs before you embark on creating or migrating to memory-optimized tables. You’ll be happy you did.



SQL Server Hosting - HostForLIFE.eu :: SQL Comments Statement

January 20, 2020 11:15 by author Peter

SQL comments can make your application easier to read and maintain. For example, we can include a comment in a statement that describes the purpose of the statement within your application. With the exception of hints, comments within a SQL statement do not affect the statement execution. Please refer to the documentation on hints for that particular form of comment.
 
A comment can appear between any keywords, parameters, or punctuation marks in a statement. You can include a comment in a statement in two ways:

  • Begin the comment with a slash and an asterisk (/*). Proceed with the text of the comment; this text can span multiple lines. End the comment with an asterisk and a slash (*/). The opening and terminating characters need not be separated from the text by a space or a line break.
  • Begin the comment with -- (two hyphens). Proceed with the text of the comment; this text cannot extend to a new line. End the comment with a line break.

Some of the tools used to enter SQL have additional restrictions. For example, if you are using SQL*Plus, by default you cannot have a blank line inside a multiline comment.
 
For more information, please refer to the documentation for the tool you use as an interface to the database. A SQL statement can contain multiple comments of both styles. The text of a comment can contain any printable characters in your database character set.
 
The comment statement indicates the user-provided text. Comments can be inserted on a separate line, nested at the end of a SQL command line, or within a SQL statement. The server does not evaluate the comment. 
SQL comments use the two hyphens (--) for single-line or nested comments. Comments inserted with -- are terminated by a newline, which is specified with a carriage return character (U+000D), a line feed character (U+000A), or a combination of the two.
 
There is no maximum length for comments.
 
Syntax
    -- text_of_comment    

Examples
The following example uses the -- commenting characters.
 
Syntax
    -- Choose the sample database.     
    USE sample;     
    GO     
    -- Choose all columns and all rows from the Address table.     
    SELECT *     
    FROM OrderDetails     
    ORDER BY OrderId  ASC; -- We do not have to specify ASC because      
    -- that is the default.     


SQL Single Line Comments
Single line comments start with --. Any text between -- and the end of the line will be ignored (will not be executed). The following example uses a single-line comment as an explanation.
 
Syntax 
   --Select all:   
    SELECT * FROM OrderDetails ; 


The following example uses a single-line comment to ignore the end of a line.
 
Syntax
    SELECT * FROM OrderDetails -- WHERE OrderName='Coffee';  

The following example uses a single-line comment to ignore a statement.
 
Syntax
    --SELECT * FROM OrderDetails;   
    SELECT * FROM OrderDetails ;  


SQL Multi-line Comments
SQL Multi-line comments start with /* and end with */. Any text between /* and */ will be ignored. The following example uses a multi-line comment as an explanation.
 
Syntax 
    /*Select all the columns   
    of all the records   
    in the OrderDetails table:*/   
    SELECT * FROM OrderDetails;  


 The following example uses a multi-line comment to ignore many statements.
 
Syntax
    /*SELECT * FROM Customers;   
    SELECT * FROM Products;   
    SELECT * FROM Orders;   
    SELECT * FROM Categories;*/   
    SELECT * FROM OrderDetails;  


To ignore just a part of a statement, also use the /* */ comment. The following example uses a comment to ignore part of a line.
 
Syntax
    SELECT CustomerName, /*City,*/ Country FROM Customers;  

The following example uses a comment to ignore part of a statement
 
Syntax
    SELECT * FROM OrderDetails WHERE (OrderName LIKE 'L%'   
    OR OrderName LIKE 'R%' /*OR OrderName  LIKE 'S%'   
    OR OrderName LIKE 'T%'*/ OR OrderName LIKE 'W%')   
    AND OrderName ='Mango'   
    ORDER BY OrderAddress;  


Summary
In this article, you learned how to use a SQL Comments statement with various options.

 



SQL Server Hosting - HostForLIFE.eu :: Transaction Explained in SQL Server

December 18, 2019 11:56 by author Peter

In this blog, I will explain SQL transactions. A transaction is a logical unit of work. Each transaction begins with a specific task and ends when all tasks in the group complete successfully; if any task fails, the transaction fails. All steps must either be committed (transaction success) or rolled back (transaction failure). A transaction begins with the execution of its first SQL statement and must end with either a commit or a rollback. If all of its individual operations succeed, the transaction succeeds and is committed to the database; if any individual operation fails, the whole transaction fails and must be undone (rolled back). The following are the properties of a transaction.
 
1. Atomicity
2. Consistency
3. Isolation
4. Durability

Transaction Process

  • BEGIN TRANSACTION - Starts the transaction
  • ROLLBACK - If an error occurred, reverts the existing transaction changes
  • COMMIT - If no error occurred, saves all changes made in the transaction
  • SAVE TRANSACTION - Sets a named savepoint to which the transaction can later be partially rolled back

Example
    BEGIN TRANSACTION T1 
    UPDATE TB_NAME SET FIRST_NAME ='R' WHERE ID=1 
    COMMIT; 
     
    BEGIN TRANSACTION T1 
    UPDATE TB_NAME SET FIRST_NAME ='R' WHERE ID=1 
    ROLLBACK; 
     
    BEGIN TRANSACTION T1 
    SAVE TRANSACTION S1 
    UPDATE TB_NAME SET FIRST_NAME ='R' WHERE ID=1 
    ROLLBACK TRANSACTION S1 
    COMMIT; 
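
A common pattern that combines these statements is to wrap the work in TRY/CATCH so that any error triggers a rollback; this is only a sketch, using the illustrative table and column names from above:

    BEGIN TRY 
        BEGIN TRANSACTION; 
        UPDATE TB_NAME SET FIRST_NAME ='R' WHERE ID=1; 
        UPDATE TB_NAME SET FIRST_NAME ='S' WHERE ID=2; 
        COMMIT TRANSACTION; 
    END TRY 
    BEGIN CATCH 
        IF @@TRANCOUNT > 0 
            ROLLBACK TRANSACTION; 
        THROW; -- re-raise the original error 
    END CATCH;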


 



SQL Server 2019 Hosting - HostForLIFE.eu :: Creating a custom sequence in MS-SQL Server

December 13, 2019 11:56 by author Peter

A sequence object in MS-SQL Server is designed to define and return only integer values, such as int, bigint, smallint, and tinyint. However, if we want to generate sequence values that are alphanumeric, we can define a stored procedure that combines a prefix with the sequence value to produce an alphanumeric result. This blog gives a complete idea of how this can be implemented.

CREATE DATABASE sampdb1

use sampdb1

--First Create a sequence object s3 which will generate numbers from 1 to 5 and cycles

CREATE SEQUENCE s3

START WITH 1

INCREMENT BY 1

MINVALUE 1

MAXVALUE 5

CYCLE

--Ensuring that the sequence object is defined properly and generating defined values.

SELECT NEXT VALUE FOR s3

--Defining a Stored Procedure that generates a Custom-sequence of values. This stored procedure is going to just get the generated alpha-numeric combination of the sequence.

CREATE PROCEDURE genSeqVals AS

Begin

DECLARE @n as int, @msg varchar(4)

SELECT @n=NEXT VALUE FOR s3

PRINT 'A'+cast(@n as varchar(2))

End

--Execute the following code and check.

Exec genSeqVals

--Defining another Stored Procedure that generates a Custom-sequence of values. This stored procedure returns the alpha-numeric combination of the sequence value that is generated using an OUTPUT parameter.

CREATE PROCEDURE getSeqVals(@res varchar(4) OUTPUT) AS

Begin

DECLARE @n as int, @msg varchar(4)

SELECT @n=NEXT VALUE FOR s3

SET @res = 'A'+cast(@n as varchar(2))

End

--Execute the following code and check.

DECLARE @seqnum varchar(4)

EXECUTE getSeqVals @seqnum output

print @seqnum
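
A possible variation (purely illustrative) is to zero-pad the numeric part so the generated values sort correctly as text, e.g. A001, A002:

DECLARE @n2 int, @code varchar(4)

SET @n2 = NEXT VALUE FOR s3

SET @code = 'A' + RIGHT('000' + CAST(@n2 AS varchar(3)), 3)

PRINT @code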


This is one way in which sequence objects can be custom-implemented as per the requirement. I hope that the above lines of code have given you deeper insight into T-SQL for custom implementation. Happy coding!

 



SQL Server 2019 Hosting - HostForLIFE.eu :: Drop Indexes In Views In SQL Server

December 3, 2019 11:51 by author Peter

In this article, I am going to talk about dropping indexes on a View in SQL Server. Views are virtual tables in which data from one or more tables is presented as a result set with rows and columns, just like a regular SQL table. A View doesn’t store the data permanently in the database, and its result set is determined only at execution time. When a View contains a large number of rows and has complex logic in it, we can create an index on the View to improve query performance. An indexed View has a unique clustered index, and it is stored in the database just as a table's clustered index is. Now, let’s run a few scenarios to check when the clustered index created on a View gets dropped automatically.

First, I will create a table on which I will run those scenarios.
CREATE TABLE [dbo].[Customer] 
    ( 
        [CustomerID] [int] IDENTITY(1, 1) NOT NULL, 
        [CustomerName] [varchar](50) NOT NULL, 
        [CEO] [varchar](40) NULL, 
        [Phone] [varchar](20) NOT NULL, 
        PRIMARY KEY CLUSTERED ([CustomerID] ASC) 
    ) 
GO 


Now, I will create a View which will use this Customer table.
-- Create view 
Create VIEW vw_customer 
 WITH SCHEMABINDING  
AS  
 SELECT  CustomerID, CustomerName, CEO 
  from dbo.Customer 
  GO 


Here in the definition of the View, I have used WITH SCHEMABINDING, which is necessary for creating an index on a View. This option ensures that we cannot drop any of the base tables used in the View; to make any such change, we first need to drop or alter the View.

Also, all table references in a View should use the two-part naming convention (schemaname.tablename), as we have in the vw_customer view (dbo.Customer).

Now, I will create an index on our View.


Scenario 1

Create index IX_CustomerID  
 ON vw_customer (CustomerID); 
 GO 


Error returned: Cannot create index on view 'vw_customer'. It does not have a unique clustered index. On Views, the first index must be a unique clustered index, so this will throw the error.


So, the first index on a View should be a UNIQUE CLUSTERED INDEX, else it will throw an error.

Scenario 2
Create unique clustered index IX_CustomerID  
 ON vw_customer (CustomerID) 
 GO 

Now our indexed View is created, with a clustered index on it. Next, I want to alter my View and add one more column, GETDATE() as CurrentDate, to the View definition.

Now, alter the View after Scenario 2.

Scenario 3
ALTER  VIEW vw_customer 
 WITH SCHEMABINDING  
AS  
 SELECT  CustomerID, CustomerName , GETDATE() as CurrentDate 
  from dbo.Customer 
GO 

Now, as I have altered the View, I want to create another index on column CustomerName which will be a NonClustered index.

Scenario 4
Create index IX_CustomerName  
 ON vw_customer (CustomerName); 
GO 


Again, I get an error: Cannot create index on view 'vw_customer'. It does not have a unique clustered index.

Even though we had already created a unique clustered index on the View, it still gives an error. The interesting thing is that after altering the View, the index that was created in Scenario 2 was dropped, so the code throws the same error as in Scenario 1.
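
One way to confirm this is to list the indexes that currently exist on the view; a small sketch (after the ALTER VIEW in Scenario 3, this should return no rows):

SELECT i.name, i.type_desc 
FROM sys.indexes AS i 
WHERE i.object_id = OBJECT_ID('dbo.vw_customer') 
  AND i.name IS NOT NULL;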

As mentioned here,
"ALTER VIEW can be applied to indexed views; however, ALTER VIEW unconditionally drops all indexes on the view."
Hope you will like this post.


 



SQL Server 2019 Hosting - HostForLIFE.eu :: SQL Constraints Explained

November 12, 2019 10:22 by author Peter

In this blog, I will explain SQL constraints. They are used to specify rules for the data in a table. The following SQL constraints are commonly used: primary key, foreign key, unique key, composite key, NOT NULL, and CHECK.

Primary Key

The primary key must contain unique values. It is the key column of the table and can’t have a NULL value. It uniquely identifies each row in a database table.
 
Syntax
    CREATE TABLE TB_NAME(Column1 datatype,column2 datatype PRIMARY KEY(Column1)) 

Foreign Key
1. A foreign key always refers to the primary key column of another table.
2. A foreign key column is allowed to contain duplicate values.
 
Syntax
    CREATE TABLE TB_NAME(column1 datatype FOREIGN KEY REFERENCES parent_table(primary_key_column), column2 datatype) 

Unique Key
 
The unique key is similar to the primary key, but it accepts one row with a NULL value.
 
Syntax
    CREATE TABLE TB_NAME(Column_name datatatype UNIQUE,column_name2 datatype) 

Composite key
A composite key is a set of multiple columns that, together, uniquely identify each record.
 
Syntax
    CREATE TABLE TB_NAME(Column1 datatype,column2 datatype PRIMARY KEY(Column1,column2)) 

Not Null
Forces a column not to accept NULL values
 
Syntax
    CREATE TABLE TB_NAME(Column1 datatype,column2 datatype NOT NULL) 

Check
The CHECK constraint is used to limit the value range that can be placed in a column.
 
Syntax
    CREATE TABLE TB_NAME(MARKS INT CHECK(MARKS<=100))
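
To tie these together, here is an illustrative sketch (the table and column names are made up) combining several of the constraints above:

    CREATE TABLE Department( 
        DeptID int PRIMARY KEY, 
        DeptName varchar(50) NOT NULL UNIQUE 
    ) 
     
    CREATE TABLE Student( 
        StudentID int NOT NULL, 
        DeptID int FOREIGN KEY REFERENCES Department(DeptID), 
        Email varchar(100) UNIQUE, 
        Marks int CHECK(Marks <= 100), 
        PRIMARY KEY(StudentID) 
    )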



About HostForLIFE

HostForLIFE is European Windows Hosting Provider which focuses on Windows Platform only. We deliver on-demand hosting solutions including Shared hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes.

We have offered the latest Windows 2019 Hosting, ASP.NET 5 Hosting, ASP.NET MVC 6 Hosting and SQL 2019 Hosting.

