European Windows 2019 Hosting BLOG

BLOG about Windows 2019 Hosting and SQL 2019 Hosting - Dedicated to European Windows Hosting Customer

European SQL Server 2019 Hosting :: Order of SQL Execution

clock April 17, 2023 07:27 by author Peter

SQL is a widely used programming language for managing and manipulating relational databases. SQL statements are executed in a specific order to produce the desired results. Understanding the order of SQL execution is crucial for developing efficient and effective SQL queries. In this article, we will discuss the order of SQL execution with clear examples.

SQL execution order can be divided into three major parts.
    FROM and JOIN
    WHERE, GROUP BY, and HAVING
    SELECT and ORDER BY


Let’s explore each of these parts in more detail.

FROM and JOIN in SQL
The first step in SQL execution is to retrieve data from one or more tables using the FROM and JOIN clauses. The FROM clause specifies the tables from which data is retrieved, while the JOIN clause combines data from multiple tables.

For example, consider the following SQL query,
SELECT *
FROM orders
JOIN customers ON orders.customer_id = customers.customer_id

In this query, the FROM clause retrieves data from the “orders” table, and the JOIN clause combines data from the “customers” table based on the “customer_id” column.
WHERE, GROUP BY, and HAVING in SQL

After retrieving data from one or more tables, the next step is to filter the data based on certain criteria using the WHERE clause. The WHERE clause is used to specify conditions that must be met by the data being retrieved.

For example, consider the following SQL query,
SELECT *
FROM orders
JOIN customers ON orders.customer_id = customers.customer_id
WHERE customers.country = 'USA'


In this query, the WHERE clause filters the data to retrieve only the orders made by customers from the United States.

GROUP BY and HAVING in SQL
The GROUP BY clause is used to group the retrieved data based on one or more columns, and the HAVING clause filters the grouped data based on certain criteria.

For example, consider the following SQL query,
SELECT COUNT(*) AS order_count, customers.country
FROM orders
JOIN customers ON orders.customer_id = customers.customer_id
GROUP BY customers.country
HAVING COUNT(*) > 100

In this query, the GROUP BY clause groups the retrieved data by country, and the HAVING clause filters the grouped data to only retrieve countries with more than 100 orders.

SELECT and ORDER BY in SQL
The final step in SQL execution is to select the columns to be displayed using the SELECT clause and, optionally, to sort the results using the ORDER BY clause. The SELECT clause specifies the columns to be displayed and can also include aggregate functions to perform calculations on the retrieved data.

For example, consider the following SQL query,
SELECT customers.customer_id, customers.name, COUNT(*) AS order_count
FROM orders
JOIN customers ON orders.customer_id = customers.customer_id
GROUP BY customers.customer_id, customers.name
ORDER BY order_count DESC

In this query, the SELECT clause specifies the customer ID, name, and the number of orders they have made. The ORDER BY clause orders the results by the number of orders in descending order.

Understanding the order of SQL execution is crucial for developing efficient and effective SQL queries. The FROM and JOIN clauses retrieve data from one or more tables, the WHERE, GROUP BY, and HAVING clauses filter and group the retrieved data, and the SELECT and ORDER BY clauses select and order the columns to be displayed. By following this order of execution, you can write complex SQL queries with ease and efficiency.
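
To see the whole sequence in one place, here is the final example again, combined with the WHERE filter from the earlier example and annotated with the logical order in which SQL Server processes each clause (the numbers refer to logical processing order, not the order the clauses are written in):
SELECT customers.customer_id, customers.name, COUNT(*) AS order_count   -- 5. SELECT: pick columns and compute aggregates
FROM orders                                                              -- 1. FROM: start with the orders table
JOIN customers ON orders.customer_id = customers.customer_id            -- 2. JOIN: combine with customers
WHERE customers.country = 'USA'                                          -- 3. WHERE: filter rows before grouping
GROUP BY customers.customer_id, customers.name                          -- 4. GROUP BY (and HAVING, if present): group and filter groups
ORDER BY order_count DESC                                                -- 6. ORDER BY: sort the final result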

HostForLIFEASP.NET SQL Server 2019 Hosting



European SQL Server 2019 Hosting :: Synchronizing Databases Using Microsoft Sync Framework

clock April 14, 2023 08:57 by author Peter

Microsoft Sync Framework is a powerful tool for synchronizing data across multiple databases or devices. To use it, you'll need to write some C# code that defines the synchronization logic. Here's an example of using the Sync Framework to synchronize two SQL Server databases.

First, you'll need to add a reference to the Sync Framework assemblies in your C# project. You can do this by right-clicking on your project in Visual Studio, selecting "Add Reference", and then browsing to the location of the Sync Framework assemblies (usually C:\Program Files\Microsoft Sync Framework\2.0\Runtime).

Second, you'll need to define the databases you want to synchronize. In this example, we'll assume you have a "local" SQL Server database on your computer and a "remote" database on a server somewhere. Here's some sample code that defines these databases:

SqlSyncProvider localProvider = new SqlSyncProvider("LocalConnectionString");
SqlSyncProvider remoteProvider = new SqlSyncProvider("RemoteConnectionString");

In this code, "LocalConnectionString" and "RemoteConnectionString" should be replaced with the actual connection strings for your local and remote databases.

Next, you'll need to define the tables you want to synchronize. Here's some sample code that defines a single table called "Customers":
SyncTable customerTable = new SyncTable("Customers");
customerTable.SyncDirection = SyncDirection.Bidirectional;
customerTable.CreationOption = TableCreationOption.DropExistingOrCreateNewTable;
customerTable.Columns.AddRange(new string[] { "CustomerId", "FirstName", "LastName", "Email" });


This code creates a SyncTable object for the Customers table, sets the synchronization direction to bidirectional (meaning changes in either database will be synchronized to the other), and specifies that the table should be dropped and recreated if it already exists. Finally, it adds the four columns that we want to synchronize.

Now that we've defined our local and remote providers and Customers table, we can create a SyncOrchestrator object to handle the synchronization. Here's some sample code that does this:
SyncOrchestrator orchestrator = new SyncOrchestrator();
orchestrator.LocalProvider = localProvider;
orchestrator.RemoteProvider = remoteProvider;
orchestrator.Direction = SyncDirectionOrder.UploadAndDownload;
orchestrator.Synchronize();


This code creates a SyncOrchestrator object, sets the local and remote providers, sets the synchronization direction to upload and download (meaning changes in both databases will be synchronized), and then calls the Synchronize method to start the synchronization process.

This is just a simple example, and there are many more options and settings that you can use to customize the synchronization process. But hopefully, this gives you a good starting point for using the Microsoft Sync Framework in your C# applications.

HostForLIFEASP.NET SQL Server 2019 Hosting



European SQL Server 2019 Hosting :: How To Configure Transactional Replication In MS SQL Server?

clock April 11, 2023 07:51 by author Peter

What is Replication?
MS SQL Server database replication is a technology for copying, distributing, and synchronizing data and objects from one database to another continuously or at predetermined intervals.

Types of Replication in SQL Server
There are four types of replication in MS SQL Server,

  • Transactional replication
  • Peer-to-peer replication
  • Merge replication
  • Snapshot replication

Transactional replication
Transactional replication is a technique used in Microsoft SQL Server to distribute and synchronize data from one database to another. This type of replication is commonly used in scenarios where there is a need to keep multiple databases in sync, such as in a distributed or remote environment. This article will discuss how to configure MS SQL Server Transactional Replication.

Peer-to-Peer replication
Peer-to-peer publication enables multi-master replication. The publisher streams transactions to all the peers in the topology. All peer nodes can read and write changes, which are propagated to all the nodes in the topology.

Merge replication

The Publisher and Subscribers can update the published data independently after the Subscribers receive an initial snapshot of the published data. Changes are merged periodically. Microsoft SQL Server Compact Edition can only subscribe to merge publications.

Snapshot replication
The Publisher sends a snapshot of the published data to Subscribers at scheduled intervals.

Here we will learn how to configure Transactional replication.

Step 1. Prepare the Environment
Before configuring transactional replication, you must ensure a stable and reliable SQL Server environment. This includes ensuring that you have the necessary permissions to create and manage replication and that the SQL Server instances are properly configured to support replication. You also need to ensure that the databases to be replicated are compatible with transactional replication.

Step 2. Create a Publication
The first step in configuring transactional replication is to create a publication. A publication is a set of one or more articles that define the data to be replicated. To create a publication, follow these steps,

  • Open SQL Server Management Studio (SSMS) and connect to the SQL Server instance.
  • Expand the Replication folder, right-click the Local Publications folder, and select New Publication.
  • The New Publication Wizard will start. Click Next to proceed.
  • Select the database that you want to replicate and click Next.
  • Select Transactional publication and click Next.
  • Select the articles that you want to replicate and click Next.
  • Configure the snapshot options, such as when to generate a new snapshot and where to store it. Click Next.
  • Configure the subscription options, such as the type of subscription and the security settings. Click Next.
  • Review the summary and click Finish
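
For reference, the same publication can also be created with T-SQL instead of the wizard. This is only a rough sketch; the database, publication, and article names below are placeholders:
USE MyPublishedDB;
GO
-- Enable the database for transactional publishing
EXEC sp_replicationdboption @dbname = N'MyPublishedDB', @optname = N'publish', @value = N'true';
-- Create the publication and its Snapshot Agent
EXEC sp_addpublication @publication = N'MyTransPublication', @status = N'active';
EXEC sp_addpublication_snapshot @publication = N'MyTransPublication';
-- Add a table as an article
EXEC sp_addarticle @publication = N'MyTransPublication', @article = N'Customers', @source_object = N'Customers', @source_owner = N'dbo';
GO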

Step 3. Create a Subscription
Once you have created a publication, you need to create a subscription. A subscription is a copy of the publication on another SQL Server instance. To create a subscription, follow these steps,

  • Open SQL Server Management Studio (SSMS) and connect to the SQL Server instance where you want to create the subscription.
  • Expand the Replication folder, right-click the Local Subscriptions folder, and select New Subscription.
  • The New Subscription Wizard will start. Click Next to proceed.
  • Select the publication that you want to subscribe to and click Next.
  • Configure the subscription options, such as the type of subscription and the security settings. Click Next.
  • Configure the synchronization options, such as when and how to handle conflicts. Click Next.
  • Review the summary and click Finish.
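
The subscription can also be scripted. A minimal sketch for a push subscription, run at the Publisher, with placeholder server and database names:
EXEC sp_addsubscription
    @publication = N'MyTransPublication',
    @subscriber = N'SubscriberServer',
    @destination_db = N'MySubscriberDB',
    @subscription_type = N'Push';
GO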

Step 4. Configure Replication Agents
After creating a publication and subscription, you need to configure replication agents. Replication agents are processes that manage the replication process between the publisher and subscriber. There are two types of replication agents: the Snapshot Agent and the Log Reader Agent. To configure the agents, follow these steps,

  • Open SQL Server Management Studio (SSMS) and connect to the SQL Server instance.
  • Expand the Replication folder and right-click the Publication folder. Select Properties.
  • Select the Snapshot Agent tab and configure the options, such as the schedule and the delivery method. Click OK.
  • Select the Subscription folder and right-click the subscription. Select Properties.
  • Select the Subscription Options tab and configure the options, such as the schedule and the delivery method. Click OK.


Step 5. Start the Replication Process
Once you have completed the above steps, you must start the replication process. To start the replication process, follow these steps,

  • Open SQL Server Management Studio (SSMS) and connect to the SQL Server instance.
  • Expand the Replication folder and select the Publication folder.
  • Right-click the publication and select Start Synchronizing.
  • The Start Synchronizing Wizard will start. Click Next to proceed.
  • Review the synchronization settings and click Next.

  • Select the subscription database and click Next.

The replication process will start, and you can monitor its progress in the Replication Monitor window in SQL Server Management Studio.

Configuring MS SQL Server Transactional Replication can be a complex task, but it is essential for maintaining data consistency in distributed or remote environments. By following the steps outlined in this article, you can configure transactional replication in a way that meets your specific requirements.

Before deploying it in a production environment, remember to test the replication process thoroughly. With proper planning and testing, transactional replication can be a powerful tool for ensuring data consistency across multiple databases.

In this article, we learned how to configure transactional replication in SQL Server. I hope this helps the readers. Happy Coding!!!

HostForLIFEASP.NET SQL Server 2019 Hosting



European SQL Server 2019 Hosting :: What Are Functions In SQL Server?

clock April 6, 2023 07:34 by author Peter

SQL Server Functions
A function in SQL Server is a named set of SQL statements that can be called and executed as a single unit. A function can perform calculations, manipulate data, and return either a scalar value or a table-based result. There are two kinds of functions in SQL Server.

1. Scalar function
A scalar function returns a single value, such as an integer, float, or string. Scalar functions are used inside SQL statements, such as SELECT or WHERE clauses, to perform calculations or transform data.

Here is an example of a scalar function that returns the length of a string,
CREATE FUNCTION fnStringLength (@str VARCHAR(50))
RETURNS INT
AS
BEGIN
    DECLARE @length INT
    SET @length = LEN(@str)
    RETURN @length
END


2. Table-valued functions
A table-valued function returns a table as its result set. Table-valued functions are often used to encapsulate complex queries and simplify their use in other parts of the database.

Here is an example of a table-valued function that returns a table of employees who have a salary greater than a specified amount,
CREATE FUNCTION fnGetHighSalaryEmployees (@salary FLOAT)
RETURNS TABLE
AS
RETURN (
    SELECT * FROM Employees WHERE Salary > @salary
)


Functions can be created using the CREATE FUNCTION statement and called using the SELECT statement or within other SQL statements. It's important to note that functions can significantly impact performance, so it's essential to use them judiciously and test their performance in real-world scenarios.
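
For example, the two functions defined above could be called like this (assuming they were created in the dbo schema):
-- A scalar function must be called with its schema name
SELECT dbo.fnStringLength('SQL Server') AS StringLength;

-- A table-valued function is used like a table in the FROM clause
SELECT * FROM dbo.fnGetHighSalaryEmployees(50000);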

HostForLIFEASP.NET SQL Server 2019 Hosting



European SQL Server 2019 Hosting :: Compare Comma Separated Values in SQL

clock March 28, 2023 08:54 by author Peter

In this article, we will see how we can compare comma-separated values with existing values stored in the table. The values may come from the user in a different order than they are stored in the table, so we first have to sort the values on both sides and then compare them.

Create a Table and Insert the Data
Here I create a table with only one column, which contains comma-separated values, and insert four rows for now.
CREATE TABLE COMMA_SEPERATE_VALUES
(
    ITEM VARCHAR(500) NOT NULL
)
INSERT INTO COMMA_SEPERATE_VALUES VALUES
('123,456,789'),
('ABC,XYZ,PQR'),
('321,654,987'),
('ZXY,BCA,QRP')

But before starting the Query for comparing values, we must clarify the following concepts.

String_split Function
The STRING_SPLIT() function is a table-valued function that splits a string into a table that consists of rows of substrings based on a specified separator.

Syntax
STRING_SPLIT ( input_string , separator )
    input_string is the string on which we want to perform the operation. The input string must be varchar, nvarchar, nchar, or char.
    separator is a single-character value used to split the string.

SELECT [VALUE] FROM string_split('456,123,789',',') ORDER BY [VALUE]

When we execute the above Query, it will return a table with values in ascending order, as shown below.

FOR XML PATH
In SQL Server, the FOR XML clause allows us to return the results of a query as an XML document. Simply placing the FOR XML clause at the end of the Query will output the results in XML.

More importantly, it returns the data in a single row, no matter how many data rows you have.
SELECT [VALUE] FROM string_split('456,123,789',',') ORDER BY [VALUE] ASC FOR XML PATH

Executing the above Query will return output as shown in the image below.


As you can see in the above image, the output has the main row element and then an element for the column name. For this use case, we don't need that. We can rename the row element by specifying a name after PATH.
SELECT [VALUE] FROM string_split('456,123,789',',') ORDER BY [VALUE] ASC FOR XML PATH('')

The above Query will return output as shown in the below image, but it still wraps each value in an element named after the column, which we don't need. We need the values separated by commas. To get this type of output, we can use the below Query.
SELECT ','+[VALUE] FROM string_split('456,123,789',',') ORDER BY [VALUE] ASC FOR XML PATH('')

In the above Query, we concatenate the column value after a comma and do not specify any column alias, so no element is generated for this column, and we get the output as shown below.

STUFF Function
The STUFF() function deletes a part of a string and then inserts another part into the string, starting at a specified position.

Syntax
STUFF(string, startIndex, length, new_string)
    string: The string to be modified
    startIndex: The position in the string to start to delete some characters
    length: The number of characters to delete from the string
    new_string:  The new string to insert into the string at the start position

SELECT STUFF('PETER',1,6,'SCOTT')

In the above Query, STUFF starts at position 1 and deletes 6 characters, which covers the whole string. So, as you can see in the image below, PETER is replaced with SCOTT.

SELECT STUFF((SELECT ','+[VALUE] FROM string_split('456,123,789',',') ORDER BY [VALUE] ASC FOR XML PATH('')),1,1,'')

The above Query will return a string by removing the first comma from the string so we can use it in our comparison.

Compare Comma Separate String With Table
In our table, there are 4 records, as you can see in the below image.
SELECT * FROM COMMA_SEPERATE_VALUES


Now we want to check if any record contains the values 456,123,789.
SELECT * FROM COMMA_SEPERATE_VALUES
WHERE
STUFF((SELECT ','+[VALUE] FROM string_split(ITEM,',') ORDER BY [VALUE] ASC FOR XML PATH('')),1,1,'')
=STUFF((SELECT ','+[VALUE] FROM string_split('456,123,789',',') ORDER BY [VALUE] ASC FOR XML PATH('')),1,1,'')


In the above Query, we compare the sorted, comma-separated strings on both sides with the equality operator.

In the where clause, we used the same functions discussed above.

The left-hand side value comes from the table, and the right side comes from the user’s input.

As you can see in the below image, it will return output with the matched record.


SELECT * FROM COMMA_SEPERATE_VALUES
WHERE
STUFF((SELECT ','+[VALUE] FROM string_split(ITEM,',') ORDER BY [VALUE] ASC FOR XML PATH('')),1,1,'')
=STUFF((SELECT ','+[VALUE] FROM string_split('820,123,789',',') ORDER BY [VALUE] ASC FOR XML PATH('')),1,1,'')


We don’t have any record with 820,123,789 value, so it will not return any record from the table.
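
As a side note, on SQL Server 2017 and later, the STUFF ... FOR XML PATH pattern can be replaced with STRING_AGG, which makes the same comparison a little easier to read. A sketch of the equivalent check:
SELECT * FROM COMMA_SEPERATE_VALUES
WHERE
(SELECT STRING_AGG([VALUE],',') WITHIN GROUP (ORDER BY [VALUE]) FROM string_split(ITEM,','))
=(SELECT STRING_AGG([VALUE],',') WITHIN GROUP (ORDER BY [VALUE]) FROM string_split('456,123,789',','))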

I hope you find this article helpful. If you have any doubts, you can ask in the comments below.



European SQL Server 2019 Hosting :: How To Pass JSON As Parameter To Stored Procedure?

clock March 21, 2023 09:30 by author Peter

In today's world, where data is king, storing and retrieving data has become crucial. A popular method of storing data is using a database management system (DBMS), and one of the most popular DBMSs is SQL Server. SQL (Structured Query Language) is a programming language to manage data stored in relational databases. SQL procedures are a powerful way to execute complex logic in the database and can be used to retrieve, manipulate, and process data.

JSON (JavaScript Object Notation) is a lightweight data interchange format that is easy to read and write. It has become a popular way to exchange data between web applications and APIs. SQL Server 2016 introduced built-in support for JSON data, which has made it possible to store, retrieve, and process JSON data in the database.

Passing JSON data as a parameter to SQL procedures has become a common requirement in today's applications. This article will explore how to pass JSON data as a parameter to an SQL procedure.

The first step is to create an SQL procedure that accepts a JSON parameter. The syntax for creating an SQL procedure that accepts a JSON parameter is as follows,

CREATE PROCEDURE [schema].[procedure_name]
@json_data NVARCHAR(MAX)
AS
BEGIN
-- SQL Procedure Logic
END


The parameter @json_data is of the NVARCHAR(MAX) type, which is used to store large Unicode character strings. This parameter will be used to pass JSON data to the SQL procedure.

Once the SQL procedure is created, we can pass the JSON data as a parameter to the procedure using the EXECUTE command. The syntax for passing JSON data as a parameter to an SQL procedure is as follows,

DECLARE @json_data NVARCHAR(MAX)
SET @json_data = '{"name": "John", "age": 30, "city": "New York"}'
EXECUTE [schema].[procedure_name] @json_data

In the above example, we have declared a variable @json_data and assigned a JSON object to it. We have then passed this JSON object as a parameter to the SQL procedure using the EXECUTE command.

We can use the OPENJSON function inside the SQL procedure to parse the JSON data passed as a parameter. The OPENJSON function converts the JSON data into a table, which we can then use to manipulate the data. The syntax for using the OPENJSON function inside an SQL procedure is as follows,

CREATE PROCEDURE [schema].[procedure_name]
@json_data NVARCHAR(MAX)
AS
BEGIN
DECLARE @name VARCHAR(50)
DECLARE @age INT
DECLARE @city VARCHAR(50)

SELECT @name = value FROM OPENJSON(@json_data) WHERE [key] = 'name'
SELECT @age = value FROM OPENJSON(@json_data) WHERE [key] = 'age'
SELECT @city = value FROM OPENJSON(@json_data) WHERE [key] = 'city'

-- SQL Procedure Logic using @name, @age and @city variables
END

In the above example, we have declared three variables, @name, @age, and @city, which will be used to store the data from the JSON object. We then used the OPENJSON function to extract the values from the JSON object and assigned them to the corresponding variables. We can then use these variables in our SQL procedure logic.
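
As a side note, OPENJSON also accepts an explicit schema through a WITH clause, which lets us read all three values in a single statement instead of filtering on [key]. A minimal sketch of the same assignment using that form:
SELECT @name = name, @age = age, @city = city
FROM OPENJSON(@json_data)
WITH (
    name VARCHAR(50) '$.name',
    age  INT         '$.age',
    city VARCHAR(50) '$.city'
);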

In conclusion, passing JSON data as a parameter to SQL procedures is a powerful technique that can be used to store, retrieve and process JSON data in the database. With the built-in support for JSON data in SQL Server, it has become much easier to work with JSON data in the database.

HostForLIFEASP.NET SQL Server 2019 Hosting



European SQL Server 2019 Hosting :: Types Of SQL Commands

clock March 17, 2023 10:02 by author Peter

SQL commands are used in SQL to perform various functions. These functions include building database objects, manipulating objects, populating database tables with data, updating existing data in tables, deleting data, performing database queries, controlling database access, and overall database administration.

The main categories of SQL Commands are,
    Data Definition Language (DDL)
    Data Manipulation Language (DML)
    Data Query Language (DQL)
    Data Control Language (DCL)
    Transaction Control Language (TCL)

Data Definition Language (DDL)
Data Definition Language (DDL) is a set of SQL commands used to define a database's structure, including tables, indexes, and constraints. DDL commands are used to create, modify, and delete database objects. Here are some common DDL commands:

CREATE
This command creates new database objects, such as tables, views, indexes, and constraints.

Example
CREATE TABLE Employee (
   EmployeeID INT PRIMARY KEY,
   FirstName VARCHAR(50),
   LastName VARCHAR(50),
   Email VARCHAR(100),
   HireDate DATE,
   Salary DECIMAL(10, 2),
   DepartmentID INT
);

CREATE VIEW RecentOrders AS
SELECT OrderID, OrderDate, CustomerID, TotalAmount
FROM Orders

CREATE INDEX idx_Employee_DepartmentID ON Employee (DepartmentID);

CREATE PROCEDURE InsertOrder
  @OrderDate DATE,
  @CustomerID INT,
  @TotalAmount DECIMAL(10,2)
AS
BEGIN
  INSERT INTO Orders (OrderDate, CustomerID, TotalAmount)
  VALUES (@OrderDate, @CustomerID, @TotalAmount)
END;

CREATE FUNCTION GetYearsWithCompany (@EmployeeID INT)
RETURNS INT
AS
BEGIN
  DECLARE @YearsWithCompany INT;
  SELECT @YearsWithCompany = DATEDIFF(YEAR, HireDate, GETDATE())
  FROM Employees
  WHERE EmployeeID = @EmployeeID;
  RETURN @YearsWithCompany;
END;

CREATE TRIGGER OrderAuditTrigger
ON Orders
AFTER INSERT
AS
BEGIN
  INSERT INTO OrderAudit (OrderID, OrderDate, CustomerID, TotalAmount)
  SELECT OrderID, OrderDate, CustomerID, TotalAmount
  FROM inserted;
END;

ALTER
This command is used to modify the structure of existing database objects, such as adding or removing columns from a table or changing the data type of a column.

Examples
ALTER TABLE Employees
ADD EmailAddress varchar(100);

ALTER TABLE Employees
DROP COLUMN EmailAddress;

ALTER TABLE Employees
ALTER COLUMN Salary decimal(10, 2);

ALTER TABLE Employees
ADD CONSTRAINT PK_Employees PRIMARY KEY (EmployeeID);

ALTER VIEW SalesData
AS SELECT ProductID, ProductName, QuantitySold
FROM Sales
WHERE SaleDate BETWEEN '2022-01-01' AND '2022-12-31';

ALTER PROCEDURE GetEmployeesByDepartment
   @DepartmentID int
AS
BEGIN
   SELECT * FROM Employees WHERE DepartmentID = @DepartmentID;
END;

ALTER INDEX idx_Employees_LastName
ON Employees(LastName, FirstName)
INCLUDE (Email);

ALTER FUNCTION GetTotalSales
(@StartDate DATE, @EndDate DATE)
RETURNS MONEY
AS
BEGIN
   DECLARE @TotalSales MONEY;
   SELECT @TotalSales = SUM(TotalAmount)
   FROM Sales
   WHERE SaleDate BETWEEN @StartDate AND @EndDate;
   RETURN @TotalSales;
END;

ALTER TRIGGER trg_Employees_Insert
ON Employees
AFTER INSERT
AS
BEGIN
   INSERT INTO EmployeeAudit(EmployeeID, AuditDate, EventType)
   SELECT EmployeeID, GETDATE(), 'INSERT'
   FROM inserted;
END;


DROP
This command deletes an existing database object, such as a table, view, or index.
DROP TABLE Employee;
DROP VIEW Get_EmployeeDetail;
DROP INDEX idx_Employees_Name;
DROP PROCEDURE GetEmployeesByDepartment;
DROP FUNCTION my_function;
DROP TRIGGER my_trigger;


TRUNCATE
This command deletes all data from a table but keeps the table structure intact.
TRUNCATE TABLE suppliers;

RENAME
This command renames an existing database object, such as a table or column.
EXEC sp_rename 'old_table_name', 'new_table_name';
EXEC sp_rename 'table_name.old_column_name', 'new_column_name', 'COLUMN';

Note: SQL Server doesn't support the RENAME keyword in the ALTER TABLE statement. Instead, you can use the sp_rename system stored procedure to rename a table and table column.

COMMENT
This command adds comments to a database object, such as a table or column, to provide additional information about the object.

Single-line comments: These comments start with two hyphens "--" and continue until the end of the line. For example:
SELECT * FROM customers -- This is a comment
Multi-line comments: These comments start with "/*" and end with "*/". They can span multiple lines. For example:
/* This is a
multi-line comment */

In short, DDL commands are used for creating and modifying the structure of a database.

Data Manipulation Language (DML)

Data Manipulation Language (DML) is a set of SQL commands used to manipulate data stored in a database. DML commands retrieve, insert, update, and delete data in tables. Here are some common DML commands:
SELECT

This command retrieves data from one or more tables in a database.
SELECT column1, column2, ..., columnN FROM table_name;
SELECT name, email FROM customers;
SELECT * FROM customers;


INSERT
This command is used to insert new data into a table.
INSERT INTO customers  VALUES ('RAJ', '[email protected]', '7019160263');
INSERT INTO customers (name, email, phone) VALUES ('RAJ', '[email protected]', '7019160263');
INSERT INTO OrderDetail (CustomerName, City, Country)
SELECT Name, City, Country FROM Customers;

UPDATE
This command is used to modify existing data in a table.
UPDATE customers
SET email = '[email protected]', first_name = 'Rj'
WHERE id=1


DELETE
This command is used to delete data from a table.
DELETE FROM customers; -- delete all rows
DELETE FROM customers -- delete the row whose id is 5
WHERE id = 5;

MERGE
This command performs insert, update, or delete operations on a target table based on the data in a source table.

-- Insert Merge
MERGE employees AS target
USING employees_new AS source
ON (target.id = source.id)
WHEN NOT MATCHED THEN
  INSERT (id, name, salary)
  VALUES (source.id, source.name, source.salary);

-- Update Merge

MERGE INTO customers c
USING (
  SELECT id, phone, address
  FROM customers
  WHERE email IN ('[email protected]', '[email protected]', '[email protected]')
) s
ON (c.id = s.id)
WHEN MATCHED THEN
  UPDATE SET c.phone = s.phone, c.address = s.address;

-- Delete Merge

MERGE INTO orders o
USING (
  SELECT order_id
  FROM orders
  WHERE order_date < '2022-01-01'
) s
ON (o.order_id = s.order_id)
WHEN MATCHED THEN DELETE;


DML commands are essential for managing the data stored in a database. By using DML commands, users can add, update, or delete data in a table, which is crucial for maintaining the data's accuracy and integrity.

Data Query Language (DQL)
Data Query Language (DQL) is a subset of SQL commands used to retrieve data from one or more tables in a database. DQL commands are also known as data retrieval commands.

Here are some common DQL commands,
SELECT

This command retrieves data from one or more tables in a database.
SELECT column1, column2, ..., columnN FROM table_name;
SELECT name, email FROM Employees;
SELECT * FROM Employees;

DISTINCT
This command is used to retrieve unique values from a column in a table.

SELECT DISTINCT category
FROM products;


WHERE
This command is used to filter data based on specific criteria.

SELECT *
FROM customers
WHERE age > 30;

UPDATE customers
SET email = '[email protected]'
WHERE id = 1;

DELETE
FROM customers
WHERE age > 30 AND email LIKE '%@yahoo.com';


ORDER BY
This command is used to sort data in ascending or descending order.

SELECT *
FROM customers
ORDER BY age DESC;


UPDATE customers
SET age = age + 1
WHERE id IN (
    -- ORDER BY is only allowed in a subquery together with TOP
    SELECT TOP (10) id
    FROM customers
    ORDER BY age ASC
);

-- T-SQL does not allow ORDER BY directly in a DELETE statement,
-- so the oldest rows are targeted through an ordered TOP subquery instead
DELETE FROM customers
WHERE id IN (
    SELECT TOP (10) id
    FROM customers
    WHERE age > 50
    ORDER BY age DESC
);


GROUP BY
This command is used to group data based on one or more columns.

SELECT product, SUM(quantity * price) as total_sales
FROM sales
GROUP BY product;

UPDATE sales
SET price = (
    -- the inner table needs an alias so the correlation refers to the outer row
    SELECT AVG(s2.price)
    FROM sales AS s2
    WHERE s2.product = sales.product
)
WHERE product IN (
    SELECT product
    FROM sales
    GROUP BY product
);

DELETE FROM sales
WHERE product IN (
    SELECT product
    FROM sales
    GROUP BY product
    HAVING COUNT(*) = 1
);


JOIN
This command combines data from two or more tables into a single result set.

SELECT orders.id, customers.name, orders.product, orders.quantity, orders.price
FROM orders
INNER JOIN customers ON orders.customer_id = customers.id;

SELECT departments.name AS department_name, employees.name AS employee_name
FROM departments
RIGHT JOIN employees ON departments.id = employees.department_id;

SELECT employees.name AS employee_name, departments.name AS department_name
FROM employees
LEFT JOIN departments ON employees.department_id = departments.id;

SELECT employees.name AS employee_name, departments.name AS department_name
FROM employees
FULL JOIN departments ON employees.department_id = departments.id;

SELECT A.EmployeeName AS EmployeeName1, B.EmployeeName AS EmployeeName2, A.City
FROM Employee A, Employee B
WHERE A.EmployeeID <> B.EmployeeID
AND A.City = B.City
ORDER BY A.City;


DQL commands are essential for retrieving data from a database. Using DQL commands, users can filter, sort, and group data based on specific criteria,  which is crucial for analyzing and interpreting the data stored in the database.

Data Control Language (DCL)
Data Control Language (DCL) is a set of SQL commands used to control access to a database. DCL commands are used to grant or revoke permissions to users and roles.
Here are some common DCL commands:

GRANT
This command is used to grant permissions to a user or a role.
GRANT SELECT ON dbo.mytable TO myuser;
GRANT SELECT, INSERT, UPDATE ON dbo.mytable TO myuser;
GRANT SELECT, INSERT ON SCHEMA::dbo TO myuser;


REVOKE
This command is used to revoke permissions from a user or a role.
REVOKE SELECT ON dbo.mytable FROM myuser;
REVOKE SELECT, INSERT, UPDATE ON dbo.mytable FROM myuser;
REVOKE ALL ON dbo.mytable FROM myuser;
REVOKE SELECT, INSERT ON SCHEMA::dbo FROM myuser;


DENY
This command is used to deny permissions to a user or a role.
DENY SELECT ON dbo.mytable TO myuser;
DENY SELECT, INSERT, UPDATE ON dbo.mytable TO myuser;


DCL commands are essential for managing access to a database. Using DCL commands, database administrators can control who has access to the database and what actions they can perform on the data stored in the database.

This is critical for maintaining the security and integrity of the data stored in the database.

Transaction Control Language (TCL)
Transaction Control Language (TCL) is a set of SQL commands used to manage transactions in a database. A transaction is a sequence of one or more SQL statements treated as a single unit of work. TCL commands are used to commit or rollback transactions. Here are some common TCL commands:

COMMIT
This command permanently saves the changes made by a transaction to the database.
CREATE PROCEDURE update_employee_salary
    @employee_id INT,
    @new_salary DECIMAL(10, 2)
AS
BEGIN
    BEGIN TRANSACTION;

    UPDATE company.employees
    SET salary = @new_salary
    WHERE id = @employee_id;

    COMMIT;
END


ROLLBACK
This command is used to undo the changes made by a transaction and restore the database to its previous state.
CREATE PROCEDURE update_employee_salary
    @employee_id INT,
    @new_salary DECIMAL(10, 2)
AS
BEGIN
    BEGIN TRANSACTION;

    UPDATE company.employees
    SET salary = @new_salary
    WHERE id = @employee_id;

    IF @@ERROR <> 0
    BEGIN
        ROLLBACK;
        RETURN;
    END;

    COMMIT;
END

SAVEPOINT
This command is used to set a savepoint within a transaction, which allows you to roll back to a specific point in the transaction.
CREATE PROCEDURE transfer_funds
    @from_account INT,
    @to_account INT,
    @amount DECIMAL(10, 2)
AS
BEGIN
    BEGIN TRANSACTION;

    -- Savepoint
    SAVE TRANSACTION transfer_start;

    UPDATE bank.accounts
    SET balance = balance - @amount
    WHERE id = @from_account;

    IF @@ERROR <> 0
    BEGIN
        -- Rollback to savepoint
        ROLLBACK TRANSACTION transfer_start;
        RETURN;
    END;

    UPDATE bank.accounts
    SET balance = balance + @amount
    WHERE id = @to_account;

    IF @@ERROR <> 0
    BEGIN
        -- Rollback entire transaction
        ROLLBACK;
        RETURN;
    END;

    COMMIT;
END


RELEASE SAVEPOINT

SQL Server does not have a RELEASE SAVEPOINT command (it exists in other database systems such as MySQL and PostgreSQL). In T-SQL, a savepoint created with SAVE TRANSACTION is released automatically when the transaction commits, as the following procedure shows.

CREATE PROCEDURE example_procedure
AS
BEGIN
    BEGIN TRANSACTION;

    -- Perform some operations
    INSERT INTO myTable (column1, column2) VALUES (1, 'A');
    INSERT INTO myTable (column1, column2) VALUES (2, 'B');
    INSERT INTO myTable (column1, column2) VALUES (3, 'C');

    -- Set a savepoint
    SAVE TRANSACTION mySavepoint;

    -- More operations
    INSERT INTO myTable (column1, column2) VALUES (4, 'D');
    INSERT INTO myTable (column1, column2) VALUES (5, 'E');

    -- Check for errors
    IF @@ERROR <> 0
    BEGIN
        -- Rollback to savepoint
        ROLLBACK TRANSACTION mySavepoint;
        RETURN;
    END;

    -- SQL Server has no RELEASE SAVEPOINT statement; the savepoint is
    -- released automatically when the transaction commits below

    COMMIT;
END;


TCL commands are essential for managing transactions in a database. Using TCL commands, users can ensure that changes made to the database are consistent and reliable, even if there are errors or failures during the transaction. This is critical for maintaining the integrity of the data stored in the database.

Note
It's generally a good practice to use transactions in stored procedures to ensure data integrity and prevent data corruption. Using transactions and committing or rolling back changes as needed, as sketched below, can help ensure that your database remains consistent and reliable.
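
As a rough template, the transaction in a stored procedure is commonly wrapped in TRY/CATCH so that any error rolls all the work back (the table and column names are taken from the earlier examples):
BEGIN TRY
    BEGIN TRANSACTION;

    -- data modifications go here
    UPDATE company.employees
    SET salary = salary * 1.05
    WHERE id = 1;

    COMMIT;
END TRY
BEGIN CATCH
    -- undo everything if any statement failed, then re-raise the error
    IF @@TRANCOUNT > 0
        ROLLBACK;
    THROW;
END CATCH;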

Conclusion
I hope the blog has helped you understand SQL commands.

HostForLIFEASP.NET SQL Server 2019 Hosting



European SQL Server 2019 Hosting :: SQL Server Execution Plans

clock March 13, 2023 07:45 by author Peter

There are some questions that those who are familiar with the T-SQL dialect of SQL ask themselves:
    Why is my query working slowly?
    Does the query I wrote use an index?
    Why is the index I created not used?
    Why is this query slower (faster) than the other one, despite returning the same result?

If you want answers to these questions, one of the tools that must be used to understand them is Execution plans.

What is an Execution Plan?

The execution plan is a description of the most efficient way to execute the query we wrote, as calculated by the optimizer. In other words, the execution plan shows us how our query is implemented and which algorithm is used to execute it.

In short, how SQL Server will or has executed our query.

Let's look at everything from the beginning,

After we write a query, the execution of the request goes through several stages. Those stages are concentrated into 2 large groups:

  • Events in the relational engine (RE)
  • Events in the storage engine (SE)

In the RE (relational engine), the query is parsed and optimized by the query optimizer, which prepares an execution plan. This execution plan is then sent in binary format to the SE (storage engine), which uses the plan during the execution of the query. Transaction and index operations are carried out in the storage engine.

To explain this in a consistent and simple form, let's first open SQL Server and execute the following query. (The queries are run against the AdventureWorks2012 database.)
use AdventureWorks2012
GO
SELECT
         hremphist.businessentityid
       , person.person.firstname
       , person.person.lastname
       , humanresources.employee.jobtitle
       , humanresources.department.name AS DepartmentName
       , hremphist.startdate
FROM (
                 SELECT
                         businessentityid
                       , departmentid
                       , startdate
                       , ROW_NUMBER() OVER (PARTITION BY businessentityid ORDER BY startdate DESC) AS rn
                 FROM humanresources.employeedepartmenthistory
         ) AS hremphist
         JOIN humanresources.department
                 ON humanresources.department.departmentid = hremphist.departmentid
         JOIN humanresources.employee
                 ON humanresources.employee.businessentityid = hremphist.businessentityid
         JOIN person.person
                 ON person.businessentityid = hremphist.businessentityid
WHERE hremphist.rn = 1;


We can see the execution plan of the query by pressing Ctrl+L.
Let's see together what execution stages the above request goes through,

1) Query Parsing
As soon as we execute the query we wrote, it is taken by the RE and parsed, i.e., checked for syntactic validity. If there are no syntactic problems, the query is successfully parsed. Remember that DDL statements are not optimized: a DDL operation such as CREATE TABLE can only be executed in one way, so only DML operations are optimized. The result of the query parsing process is a parse tree (also called a sequence tree or query tree).

2) Algebrizer
If the DML request has passed the parsing process successfully, it is sent to a special mechanism called the algebrizer. The algebrizer performs logical checks such as verifying that the objects we reference exist, resolving table and column names, and checking type compatibility. As a result of these checks, the algebrizer returns output in a binary format, a "query processor tree", which is then processed by the query optimizer.

3) Query Optimizer prepares Execution Plan for us.

4) Storage Engine
Physical processing of the request is performed at this stage, and the request is executed based on the Execution Plan.

The SQL query optimizer generates and prepares execution plans. When preparing an execution plan, it refers to statistics and indexes and uses the information collected there. In the next step, the execution plan produced by the query optimizer is sent to the storage engine, which physically executes the query based on it. Remember that when an execution plan is prepared, it is added to the plan cache and kept in memory, so the next time a similar plan is needed, the optimizer does not have to do the work again and can reuse the previously generated plan.

Depending on the complexity of the query, the query optimizer generates several candidate execution plans and tries to choose the fastest one among them. However, for some queries, generating every possible execution plan could take several minutes, so the query optimizer makes its selection from the first plan combinations it generates rather than from all of them, and gives us a good enough plan. Our execution plans are stored in the plan cache so that a plan does not have to be generated again for similar queries.

To delete cached data, run the following command:
DBCC FREEPROCCACHE

We can also delete the required plan by passing sql_handle and plan_handle specifically.
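
For example, a single plan can be located through the plan cache DMVs and removed by its plan_handle (the text filter below is just an illustration):
DECLARE @plan_handle VARBINARY(64);

-- Find a cached plan for a query that references employeedepartmenthistory
SELECT TOP (1) @plan_handle = cp.plan_handle
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE st.text LIKE '%employeedepartmenthistory%';

-- Remove only that plan from the cache
DBCC FREEPROCCACHE (@plan_handle);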

What can cause the current Execution Plan to change?

    Execution of parallel queries
    The statistics changed or became outdated after the execution plan was created
    Inserting data into a temporary table

It should be remembered that there are 2 main forms of EP (Execution Plan) in T-SQL,
    Estimated Execution Plan – the plan the optimizer estimates before the query is executed.
    Actual Execution Plan – the plan captured only after the query has actually been executed.

Although these plans can store a different set of data, at first glance they do not differ from each other.

There are 3 main forms of execution plans,
    Graphical description (Graphical plans)
    Description in text format (Text Plans)
    In XML format (XML Plans)

The most commonly used representation of execution plans is the graphical plan. Although the graphical form does not show all the details at first glance, detailed information can be seen in the operator tooltips and the Properties pane.

Although text plans have been declared "deprecated" by SQL Server, we can still use them. In the text version, you can see the details of the plan at a glance, search it quickly, and edit it in a text editor.
SET SHOWPLAN_ALL ON;
GO
SELECT TOP 100 [BusinessEntityID]
      ,[NationalIDNumber]
      ,[LoginID]
      ,[OrganizationNode]
      ,[OrganizationLevel]
      ,[JobTitle]
      ,[BirthDate]
      ,[MaritalStatus]
      ,[Gender]
      ,[HireDate]
      ,[SalariedFlag]
      ,[VacationHours]
      ,[SickLeaveHours]
      ,[CurrentFlag]
      ,[rowguid]
      ,[ModifiedDate]
FROM [ADV_Works].[HumanResources].[Employee]


In the test code example above, we enabled the detailed text description of the "estimated" plan. To disable this mode, write OFF instead of ON and run the query.
SET SHOWPLAN_TEXT ON;--active
GO
SET SHOWPLAN_TEXT OFF;--deactive
GO
SET STATISTICS PROFILE ON;--active text mode for actual plan


XML Plans - XML plans have 2 forms of description:

  • SHOWPLAN_XML-is generated until execution
  • STATISTICS_XML is generated after query execution

The most used form of EPs is the graphical form (Graphical Execution Plans). A graphical plan can be displayed in any of the following ways,

  • Ctrl+M (for the actual plan) or Ctrl+L (for the estimated plan)
  • By selecting the "Include Actual Execution Plan" or "Display Estimated Execution Plan" button on the toolbar
  • By right-clicking in the query window and selecting "Include Actual Execution Plan" or "Display Estimated Execution Plan" from the context menu

HostForLIFEASP.NET SQL Server 2019 Hosting

 



European SQL Server 2019 Hosting :: What is a Schema in SQL Server? With Example

clock March 6, 2023 06:53 by author Peter

Do you find it difficult to manage and organize your SQL Server databases? Are you constantly struggling to navigate through a sea of tables, views, and stored procedures? If so, you're not alone. Many developers and database administrators face this same problem every day, and it can be frustrating and time-consuming.

But what if there was a solution to this problem? What if there was a way to simplify database management and make it easier to access the data you need? That's where schemas come in.

In this article, we'll explain what a schema is in SQL Server and how it can help solve your database management woes. We'll delve into the benefits of using schemas and show you how to create and manage them in your SQL Server environment.

With our step-by-step guide, you'll learn how to use schemas to organize your database objects into logical groups, improve security by controlling access to specific schemas, and simplify database maintenance by reducing complexity. So, if you're ready to take your SQL Server skills to the next level and tackle the problem of database management head-on, let's dive in.

A schema in a SQL database is a collection of logical data structures. Historically, a schema was owned by a database user and had the same name as that user. From SQL Server 2005 onward, a schema is an independent entity (a container of objects) distinct from the user who creates it. In other words, schemas are very similar to separate namespaces or containers used to store database objects.

Security permissions can be applied to schemas; hence, schemas are essential for separating and protecting database objects based on user access rights. It improves flexibility for security-related administration of the database.
User schema separation

Before SQL Server 2005, database object owners and users were the same thing, and database objects (tables, indexes, views, and so on) were owned by the user. In other words, database objects were directly linked to the user, and the user could not be dropped without removing the database objects associated with that user.

From SQL Server 2005 onward, schema separation was introduced; the database object is no longer owned directly by a user, group, or role. Instead, the schema is owned by a user, group, or role (so ownership can effectively be shared through a role), and schema ownership is transferrable. Database objects are created within the schema. A user can now be dropped without dropping the database objects they own, but a schema cannot be deleted while it still contains database objects.

The following are the advantages of user schema separation:

    The schema ownership is transferrable.
    Database objects can be moved among the schemas.
    A single schema can be shared among multiple users.
    A user can be dropped without dropping the database objects associated with the user.
    Provides more control of access and level of access.
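
The first two points can be seen directly in T-SQL; the schema, user, and table names below are only examples:
-- Transfer ownership of a schema to another database user
ALTER AUTHORIZATION ON SCHEMA::Sales TO Peter;

-- Move a table from the dbo schema into the Sales schema
ALTER SCHEMA Sales TRANSFER dbo.Orders;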

Default schema

The default schema is the first schema searched when resolving object names. A default schema can be defined for each user. Using the SCHEMA_NAME() function, we can determine the default schema for the current user.

A schema can be made the default for a user by specifying DEFAULT_SCHEMA with CREATE USER or ALTER USER. If no default schema is defined, SQL Server assumes "dbo" as the default schema. Note that no default schema is associated with a user who is authenticated as a member of a Windows group; in this case, when such a user creates an object, a new schema can be created whose name is the same as the user name.
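
For example (the user, login, and schema names here are hypothetical):
-- Returns the default schema of the caller (dbo unless one was assigned)
SELECT SCHEMA_NAME() AS DefaultSchema;

-- Assign or change a user's default schema
CREATE USER ReportUser FOR LOGIN ReportLogin WITH DEFAULT_SCHEMA = Reporting;
ALTER USER ReportUser WITH DEFAULT_SCHEMA = dbo;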

Advantages of using Schema

  • Act as an object protection tool: A schema, combined with the appropriate level of user permissions, can be a very effective object protection tool. A DBA can maintain control over access to an object, which can be crucial.
  • Managing a logical group of database objects within a database: Schemas allow database objects to be organized into a logical group. This would be advantageous when multiple teams are working on the same database application, and the design team wants to maintain the integrity of the database tables.
  • Easy to maintain the database: A schema allows a logical grouping of the database objects. The schema can help us when the database object name is the same but falls into a different logical group.


Other Advantages

  • A single schema name can be used in multiple databases, and a schema can be shared among multiple database users.
  • A database user can be dropped without dropping database objects.
  • Manipulation of and access to objects can be controlled more precisely and more securely; the schema acts as an additional layer of security.
  • Database objects can be moved among schemas.
  • The ownership of schemas is transferable.

Example of a Schema in SQL
Let's say we have a database that contains information about a company's employees, departments, and projects. To organize this information, we can create separate schemas for each of these categories.

For example, we can create a schema called "employees" to store tables related to employee information, such as employee names, job titles, and salaries. We can also create a schema called "departments" to store tables related to department information, such as department names and locations. Finally, we can create a schema called "projects" to store tables related to project information, such as project names, budgets, and timelines.

By creating separate schemas for each category, we can easily manage and access the information we need without having to navigate through a large and complex database. This also helps to improve security by restricting access to certain schemas based on user roles and permissions.
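
For instance, access can be granted or denied at the schema level (the role names below are hypothetical):
-- Members of hr_readers can read every object in the employees schema
GRANT SELECT ON SCHEMA::employees TO hr_readers;

-- The external_app role is explicitly blocked from the same schema
DENY SELECT ON SCHEMA::employees TO external_app;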

To create a schema in SQL Server, we can use the following syntax:
CREATE SCHEMA schema_name

For example, to create a schema called "employees", we can use the following query:
CREATE SCHEMA employees

We can then create tables within the schema using the following syntax:
CREATE TABLE schema_name.table_name

For example, to create a table called "employee_info" within the "employees" schema, we can use the following query:
CREATE TABLE employees.employee_info (
   employee_id INT PRIMARY KEY,
   first_name VARCHAR(50),
   last_name VARCHAR(50),
   job_title VARCHAR(50),
   salary DECIMAL(10,2)
);


This creates a table within the "employees" schema that stores information about employee IDs, first names, last names, job titles, and salaries.

A schema in SQL Server is a way to organize database objects such as tables, views, and stored procedures into logical groups. By creating separate schemas for different categories of information, we can easily manage and access the data we need, improve security, and simplify database maintenance.

HostForLIFEASP.NET SQL Server 2019 Hosting



European SQL Server 2019 Hosting :: Weird Thing With Inner Queries in Sql Server

clock March 2, 2023 07:24 by author Peter

A few days ago, I found an extraordinary thing with SQL SERVER 2005, and I strongly feel that this is a bug in SQL Server that can create lots of data discrepancies.

Let's go through the script.
    Create a new fresh database.
    CREATE DATABASE DB_TEST


Use this Database.
USE DB_TEST

Create a table name tblCategory.
CREATE TABLE tblCategory
(
    CategoryID INT PRIMARY KEY,
    CategoryNAME VARCHAR(50)
)

Create another table named tblProduct.
CREATE TABLE tblProduct
(
    ProductID INT PRIMARY KEY,
    CategoryID INT FOREIGN KEY REFERENCES tblCategory(CategoryID),
    IsDamaged BIT
)

Insert 5 rows in tblCategory.
INSERT INTO tblCategory VALUES (1,'Category1')
INSERT INTO tblCategory VALUES (2,'Category2')
INSERT INTO tblCategory VALUES (3,'Category3')
INSERT INTO tblCategory VALUES (4,'Category4')
INSERT INTO tblCategory VALUES (5,'Category5')


Insert 10 rows in tblProduct.
INSERT INTO tblProduct VALUES (1,1,0)
INSERT INTO tblProduct VALUES (2,1,0)
INSERT INTO tblProduct VALUES (3,2,0)
INSERT INTO tblProduct VALUES (4,2,0)
INSERT INTO tblProduct VALUES (5,3,0)
INSERT INTO tblProduct VALUES (6,3,0)
INSERT INTO tblProduct VALUES (7,4,0)
INSERT INTO tblProduct VALUES (8,4,0)
INSERT INTO tblProduct VALUES (9,4,0)
INSERT INTO tblProduct VALUES (10,5,0)


Select statements to confirm that data is entered or not.
SELECT * FROM tblCategory
SELECT * FROM tblProduct


Here is a select query that is incorrect. The query tells us to select ProductID from tblCategory where CategoryID = 1, but the tblCategory table does not have a column named ProductID. So when we execute this query, it throws an error, which is the expected behavior.
SELECT ProductID FROM tblCategory WHERE CategoryID = 1

Here is the magic. I have used the above incorrect select query with an update statement as an inner query. What do you think? What should happen when you execute this query? This query should throw an error as my inner select query is incorrect. But execute this query, and you will be shocked.
UPDATE tblProduct SET IsDamaged = 1
WHERE ProductID IN
(SELECT ProductID FROM tblCategory WHERE CategoryID = 1)


Oops!!!! All the data in IsDamaged is set to 1, but my inner select query(SELECT ProductID FROM tblCategory WHERE CategoryID = 1) is wrong. 10 rows were affected.

Initially, I thought this was a bug, but it's not. The inner query first tries to resolve the column against its own table (the inner query's table), and if it is not found there, it looks at the outer query's table. It is best practice to use tableName.ColumnName in the inner query.
UPDATE tblProduct SET IsDamaged = 1
WHERE ProductID IN
(SELECT tblCategory.ProductID FROM tblCategory WHERE CategoryID = 1)


Now this inner query will throw an error. So next time, be careful whenever you are working with inner queries.

HostForLIFEASP.NET SQL Server 2019 Hosting



About HostForLIFE

HostForLIFE is European Windows Hosting Provider which focuses on Windows Platform only. We deliver on-demand hosting solutions including Shared hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes.

We have offered the latest Windows 2019 Hosting, ASP.NET 5 Hosting, ASP.NET MVC 6 Hosting and SQL 2019 Hosting.

