European Windows 2019 Hosting BLOG

BLOG about Windows 2019 Hosting and SQL 2019 Hosting - Dedicated to European Windows Hosting Customer

European SQL Server 2022 Hosting :: Understanding SQL Server Temporal Tables

clock June 5, 2024 07:31 by author Peter

Temporal tables, a robust feature introduced in SQL Server 2016, offer an integrated way to store and retrieve historical data. They make it possible to track every alteration made to the data in a table, which is very helpful for data analysis, auditing, and compliance. This article covers what temporal tables are, how they work, and how to apply them in practice.

How do Temporal Tables Work?
System-versioned tables, sometimes referred to as temporal tables, automatically keep track of all data modifications across time. They are made up of two parts:

  • Current Table: Stores the current data.
  • History Table: Automatically stores the historical versions of data.

When a row in the current table is updated or deleted, SQL Server moves the previous version of the row to the history table. This allows you to query historical data at any point in time.
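As a quick sketch of that mechanism (assuming the Employees table created in the next section), a plain UPDATE is all it takes; SQL Server versions the row on its own:

```sql
-- An ordinary UPDATE; no extra code is needed to capture history.
UPDATE Employees SET Salary = 90000 WHERE EmployeeID = 1;

-- The pre-update version of the row now sits in the history table,
-- with its SysEndTime set to the time of the UPDATE.
SELECT * FROM dbo.EmployeesHistory WHERE EmployeeID = 1;
```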

Key Features of Temporal Tables

  • Automated Data Management: Automatically manages the movement of historical data to the history table.
  • Point-in-Time Analysis: Allows querying data as it appeared at any specific point in time.
  • Auditing and Compliance: Provides an audit trail of changes for regulatory compliance.
  • Data Recovery: Enables recovery of data to a previous state without complex restore operations.

Creating Temporal Tables
Creating a temporal table involves specifying system versioning during table creation. Here’s a step-by-step guide.
Define the Current Table: Include period columns for system start and end times.

CREATE TABLE Employees
(
    EmployeeID INT PRIMARY KEY,
    Name NVARCHAR(100),
    Position NVARCHAR(100),
    Salary DECIMAL(18, 2),
    SysStartTime DATETIME2 GENERATED ALWAYS AS ROW START,
    SysEndTime DATETIME2 GENERATED ALWAYS AS ROW END,
    PERIOD FOR SYSTEM_TIME (SysStartTime, SysEndTime)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.EmployeesHistory));

In this example:

  • SysStartTime and SysEndTime are the system period columns.
  • PERIOD FOR SYSTEM_TIME declares those columns as the table's system-time period.

Automatically Manage History Table
SQL Server creates and manages the history table.
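If you want to confirm the pairing yourself, one way (a sketch using the catalog views) is:

```sql
-- List the temporal table together with the history table SQL Server manages for it.
SELECT t.name AS temporal_table,
       h.name AS history_table,
       t.temporal_type_desc
FROM sys.tables AS t
JOIN sys.tables AS h
    ON t.history_table_id = h.object_id
WHERE t.name = 'Employees';
```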

Querying Temporal Tables

Temporal tables allow you to query historical data using the FOR SYSTEM_TIME clause.

Retrieve Current Data
SELECT * FROM Employees;

Retrieve Data at a Specific Point in Time
SELECT * FROM Employees FOR SYSTEM_TIME AS OF '2023-01-01T12:00:00';

Retrieve Data Over a Time Range
SELECT * FROM Employees FOR SYSTEM_TIME BETWEEN '2023-01-01T00:00:00' AND '2023-01-01T23:59:59';

Retrieve All Historical Data
SELECT * FROM Employees FOR SYSTEM_TIME ALL;

Managing Temporal Tables
Turn Off System Versioning: You can turn off system versioning to make schema changes or manage data manually.
ALTER TABLE Employees SET (SYSTEM_VERSIONING = OFF);

Re-enable System Versioning
ALTER TABLE Employees SET (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.EmployeesHistory));

Cleanup Old Data: To manage the size of the history table, you can periodically archive or clean up old data. Note that SQL Server blocks direct modifications to a history table while system versioning is on, so turn versioning off first and then re-enable it:
ALTER TABLE Employees SET (SYSTEM_VERSIONING = OFF);
DELETE FROM EmployeesHistory WHERE SysEndTime < '2022-01-01T00:00:00';
ALTER TABLE Employees SET (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.EmployeesHistory));

Best Practices

  • Indexing: Ensure proper indexing on period columns to optimize query performance.
  • Data Retention Policies: Implement data retention policies to manage the growth of the history table.
  • Security: Secure both current and history tables to prevent unauthorized access to sensitive historical data.

Conclusion
Temporal tables in SQL Server offer a robust solution for managing historical data, providing significant benefits for auditing, compliance, and point-in-time analysis. By automatically capturing and storing historical versions of data, they simplify the process of tracking changes over time. With the ability to query data as it existed at any point in time, temporal tables enhance the capabilities of SQL Server for modern data management needs. Implementing temporal tables involves a straightforward setup, and with best practices in place, they can significantly improve your data management strategy.



European SQL Server 2022 Hosting :: SQL Server System-Versioned Temporal Tables

clock May 30, 2024 07:30 by author Peter

System-versioned temporal tables are a powerful SQL feature that stores previous data changes alongside the current data. This functionality is very helpful for auditing, data consistency, and tracking changes over time. This article examines the background, development, and necessity of system-versioned temporal tables, as well as their limitations and the most recent developments in the field. Sample SQL code is also included to show how to implement them.

Evolution and History
The necessity of handling time-sensitive data in databases gave rise to the idea of temporal tables. At first, handling historical data required developing unique solutions with complicated queries, extra tables, and triggers. These methods were frequently laborious and prone to mistakes.

With the release of SQL:2011, an ISO standard for SQL that outlined the support for system-versioned tables, temporal tables became a native capability. Subsequently, this capability was incorporated into the systems of major database suppliers, such as Microsoft SQL Server, Oracle, IBM Db2, and PostgreSQL, which made handling historical data easier.

The Need for System-Versioned Temporal Tables
System-versioned temporal tables address several critical needs.

  • Auditing and Compliance: Many industries require a detailed audit trail for compliance with regulatory standards. Temporal tables provide a straightforward way to track and retrieve historical data changes.
  • Data Analysis: Analyzing data changes over time can provide valuable insights. Temporal tables make it easier to perform such time-based analyses without additional overhead.
  • Error Correction: In case of accidental data modifications or deletions, temporal tables allow for easy retrieval of previous states, facilitating quick error correction.
  • Consistency and Integrity: Temporal tables ensure data consistency and integrity by maintaining a history of changes automatically, reducing the risk of data loss or corruption.

Drawbacks
Despite their advantages, system-versioned temporal tables have some drawbacks.

  • Storage Overhead: Storing historical data can significantly increase storage requirements. Proper planning and management are necessary to handle this overhead.
  • Performance Impact: Maintaining history can impact write performance, especially in high-transaction environments. Optimizations and indexing strategies are needed to mitigate this.
  • Complexity: Understanding and managing temporal tables can add complexity to the database schema and queries, requiring additional learning and expertise.

Latest Version and Features
The latest advancements in system-versioned temporal tables focus on improving performance, scalability, and ease of use. For instance, SQL Server supports system versioning on memory-optimized (In-Memory OLTP) tables, which helps to mitigate the write-performance impact, and SQL Server 2017 added history retention policies to keep the history table from growing without bound.

Moreover, modern implementations provide better tools for querying historical data, including enhanced support for time-based joins and advanced filtering options.
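As an example of such a management feature, a history retention policy (available in SQL Server 2017 and later, shown here against the sample Employees table below) lets SQL Server purge aged history rows in the background:

```sql
-- A sketch: keep only six months of history; older rows are cleaned up
-- automatically by a background task.
ALTER TABLE Employees
SET (SYSTEM_VERSIONING = ON (
        HISTORY_TABLE = dbo.EmployeesHistory,
        HISTORY_RETENTION_PERIOD = 6 MONTHS));
```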

Sample SQL Code
Below is a sample SQL code to create and use a system-versioned temporal table in Microsoft SQL Server.
-- Create a table with system-versioning enabled
CREATE TABLE Employees
(
    EmployeeID INT PRIMARY KEY,
    Name NVARCHAR(100),
    Position NVARCHAR(100),
    Salary DECIMAL(10, 2),
    SysStartTime DATETIME2 GENERATED ALWAYS AS ROW START HIDDEN NOT NULL,
    SysEndTime DATETIME2 GENERATED ALWAYS AS ROW END HIDDEN NOT NULL,
    PERIOD FOR SYSTEM_TIME (SysStartTime, SysEndTime)
)
WITH
(
    SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.EmployeesHistory)
);

-- Insert data
INSERT INTO Employees (EmployeeID, Name, Position, Salary)
VALUES (1, 'John Doe', 'Manager', 75000);

-- Update data
UPDATE Employees
SET Salary = 80000
WHERE EmployeeID = 1;

-- Query current data
SELECT * FROM Employees WHERE EmployeeID = 1;

-- Query historical data
SELECT * FROM EmployeesHistory WHERE EmployeeID = 1;

-- Query data at a specific point in time
SELECT * FROM Employees
FOR SYSTEM_TIME AS OF '2024-01-01T00:00:00.0000000'
WHERE EmployeeID = 1;


Conclusion
System-versioned temporal tables are a significant evolution in SQL databases, addressing the need for robust time-based data management. While they come with certain drawbacks, their benefits in terms of auditing, compliance, data analysis, and error correction make them invaluable in modern data management. Continuous advancements in this area promise even greater efficiency and usability, making temporal tables an essential tool for contemporary database solutions.

HostForLIFEASP.NET SQL Server 2022 Hosting

 




European SQL Server 2022 Hosting :: How SQL Server Processes Your Query?

clock May 28, 2024 09:32 by author Peter

SQL Server, Microsoft's flagship relational database product, is a complex piece of software with a straightforward purpose: to process SQL queries and produce results as quickly as possible. Understanding SQL Server's low-level workings can significantly boost the effectiveness and performance of your database applications. The goal of this post is to clarify the procedure SQL Server goes through when handling your queries.

Parsing and Normalization
First, when a SQL query arrives, it undergoes parsing and normalization.

  • Parsing: This phase involves scanning the incoming SQL text and dividing it into individual keywords, expressions, and identifiers. Invalid syntax would lead to query termination at this stage.
  • Normalization: This stage, also known as algebrization, converts the parsed SQL query into a tree of logical operators known as a “query tree”.

Compilation
Next, SQL Server attempts to compile the query.

  • Optimization: SQL Server's Query Optimizer evaluates different plans to execute and chooses the least costly one. The process it uses, interestingly, isn't exhaustive — checking every possible plan would require an unrealistic amount of time for complex queries. Instead, the optimizer uses heuristic algorithms and statistical metadata from distribution statistics objects to create a reasonable plan quickly.
  • Plan Generation: After the query optimization, SQL Server generates the query execution plan - the blueprint to execute the given query. These plans are stored in the "Plan Cache". If similar queries are used often, SQL Server can save resources by caching and reusing their execution plans.
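The plan cache can be inspected directly; as a sketch (using the standard dynamic management views, which require VIEW SERVER STATE permission):

```sql
-- Show the most frequently reused cached plans and their query text.
SELECT TOP (10)
       cp.usecounts,   -- times this cached plan has been reused
       cp.objtype,     -- Adhoc, Prepared, Proc, ...
       st.text AS query_text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
ORDER BY cp.usecounts DESC;
```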

Execution
With the plan in place, SQL Server moves to executing the query.

  • Execution Context Generation: The Query Execution engine generates an execution context for the query, a set of instructions that execute in line with the generated plan.
  • Data Retrieval: The system then undertakes activities like opening file handles, memory allocation based on the generated steps. Pages with necessary records are loaded from the disk into the buffer, if they aren't already there.
  • Data Return: After all processing steps are carried out, the server retrieves the data according to the instructions and sends it back to the client that requested it.
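To observe the buffer behavior described above, one simple sketch is to enable I/O statistics for a session:

```sql
SET STATISTICS IO ON;
SELECT COUNT(*) FROM sys.objects;
-- The Messages output reports "logical reads" (pages served from the
-- buffer pool) and "physical reads" (pages that had to come from disk).
SET STATISTICS IO OFF;
```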

Conclusion

SQL query processing is a complex process that involves several steps, from parsing to execution. SQL Server is tuned using strategies like cost-based query optimization and execution plan caching to manage this operation as fast and effectively as possible. When working with SQL Server in real-world applications, knowing the intricacies of this process can help tremendously with diagnostics and performance optimization.

Recall that a significant portion of SQL Server's performance is dependent on elements other than the query processing phase. These include the appropriate use of indexes, current statistics, proper database design, well-structured queries, suitable hardware, and routine maintenance. But the first step to becoming an expert with this potent relational database engine is to have a comprehensive grasp of the process that SQL Server goes through, from accepting a request to producing a response.

HostForLIFEASP.NET SQL Server 2022 Hosting


 



European SQL Server 2022 Hosting :: How to Work with LEAD and LAG Window Functions in SQL?

clock May 21, 2024 10:28 by author Peter

SQL offers numerous analytical functions that enable complex computations and data analysis. The LEAD and LAG window functions let us access data from other rows of the same result set. They are frequently used together with the OVER clause, which specifies how the result set is partitioned and ordered. The LAG function returns a value from a row that precedes the current one, whereas the LEAD function returns a value from a row that follows it.

The offset parameter, which indicates how many rows to advance or retract from the current row, is accepted by both functions. Let's examine the syntax, usage cases, and examples of LEAD and LAG to better understand how they work.

LEAD and LAG Functions Syntax

--LEAD Function Syntax:
LEAD(column_name, offset, default_value) OVER (
    PARTITION BY partition_expression
    ORDER BY order_expression
)

--LAG Function Syntax:
LAG(column_name, offset, default_value) OVER (
    PARTITION BY partition_expression
    ORDER BY order_expression
)
  • column_name: Name of the column from which you want to retrieve the value.
  • offset: Number of rows to move forward (LEAD Function) or backward (LAG Function) from the current row. The default value is 1 and should be positive.
  • default_value (optional): Default value to return if the lead or lag value is NULL or if there is no subsequent row within the partition.
  • PARTITION BY (optional): Divides the result set into partitions based on the specified expression.
  • ORDER BY: Specifies the ordering of rows within each partition.

Let's take an example using the LEAD and LAG functions. Create a table named sales with the following structure:

-- Create sales table
CREATE TABLE sales (
    product VARCHAR(50),
    year INT,
    sales_amount DECIMAL(10,2)
);

-- Add some dummy data into the table
INSERT INTO sales (product, year, sales_amount) VALUES
('Apples', 2021, 1500.50),
('Bananas', 2021, 2500.75),
('Carrots', 2021, 3200.00),
('Apples', 2022, 1700.30),
('Bananas', 2022, 2900.20),
('Carrots', 2022, 3400.60),
('Apples', 2023, 1800.00),
('Bananas', 2023, 3100.45),
('Carrots', 2023, 3600.80),
('Oranges', 2021, 1100.25),
('Oranges', 2022, 1300.50),
('Oranges', 2023, 1400.75),
('Tomatoes', 2021, 1200.00),
('Tomatoes', 2022, 1500.35),
('Tomatoes', 2023, 1600.90);

Retrieving the next product's sales amount using LEAD function

SELECT product,year,sales_amount,
    LEAD(sales_amount, 1, 0) OVER (
        ORDER BY year, product
    ) AS next_product_sales
FROM
    sales ;

--Sample O/P
+----------+------+--------------+--------------------+
| product  | year | sales_amount | next_product_sales |
+----------+------+--------------+--------------------+
| Apples   | 2021 |      1500.50 |            2500.75 |
| Bananas  | 2021 |      2500.75 |            3200.00 |
| Carrots  | 2021 |      3200.00 |            1100.25 |
| Oranges  | 2021 |      1100.25 |            1200.00 |
| Tomatoes | 2021 |      1200.00 |            1700.30 |
| Apples   | 2022 |      1700.30 |            2900.20 |
| Bananas  | 2022 |      2900.20 |            3400.60 |
| ...      | ...  |          ... |                ... |
+----------+------+--------------+--------------------+

In the above query, we use the LEAD function to retrieve the sales amount of the next row in the (year, product) ordering. If there is no subsequent row, the default_value of 0 is returned.

Retrieving the prev product's sales amount using LAG function

SELECT product, year,sales_amount,
    LAG(sales_amount, 1, 0) OVER (
        ORDER BY year, product
    ) AS prev_product_sales
FROM sales;

--Sample O/P

+----------+------+--------------+--------------------+
| product  | year | sales_amount | prev_product_sales |
+----------+------+--------------+--------------------+
| Apples   | 2021 |      1500.50 |               0.00 |
| Bananas  | 2021 |      2500.75 |            1500.50 |
| Carrots  | 2021 |      3200.00 |            2500.75 |
| Oranges  | 2021 |      1100.25 |            3200.00 |
| Tomatoes | 2021 |      1200.00 |            1100.25 |
| Apples   | 2022 |      1700.30 |            1200.00 |
| ...      | ...  |          ... |                ... |
+----------+------+--------------+--------------------+

In the above query, we use the LAG function to retrieve the sales amount of the previous row in the ordering. For the very first row there is no preceding row, so the default_value of 0 is returned.

Use Case

Suppose we want to analyze the sales data for each product, not only by comparing the current year's sales with the previous year but also by looking forward to the next year's sales. We can achieve this by combining the LEAD and LAG functions in a single query.

SELECT product, year, sales_amount,
    sales_amount - LAG(sales_amount, 1) OVER (
        PARTITION BY product
        ORDER BY year
    ) AS year_over_year_diff,
    LEAD(sales_amount, 1, 0) OVER (PARTITION BY product
        ORDER BY year) - sales_amount AS next_year_diff
FROM sales ORDER BY product, year;

-- Sample O/P
+----------+------+--------------+---------------------+----------------+
| product  | year | sales_amount | year_over_year_diff | next_year_diff |
+----------+------+--------------+---------------------+----------------+
| Apples   | 2021 |      1500.50 |                NULL |         199.80 |
| Apples   | 2022 |      1700.30 |              199.80 |          99.70 |
| Apples   | 2023 |      1800.00 |               99.70 |       -1800.00 |
| Bananas  | 2021 |      2500.75 |                NULL |         399.45 |
| Bananas  | 2022 |      2900.20 |              399.45 |         200.25 |
| Bananas  | 2023 |      3100.45 |              200.25 |       -3100.45 |
| Carrots  | 2021 |      3200.00 |                NULL |         200.60 |
| ...      | ...  |          ... |                 ... |            ... |
+----------+------+--------------+---------------------+----------------+

The LAG expression computes the year-over-year difference in sales by subtracting the previous year's sales amount from the current year's. The LEAD expression computes the difference between the next year's sales amount and the current year's. By including both expressions in the same query, we can analyze each product's sales by looking at the year-over-year change as well as the change coming in the following year.

HostForLIFEASP.NET SQL Server 2022 Hosting


 

 





European SQL Server 2022 Hosting :: How to Convert Rows to CSV & Eliminate Duplicates in SQL Server?

clock May 8, 2024 07:06 by author Peter

A user can belong to several roles, and each role carries a number of privileges. We want to display the roles and privileges associated with each user, and the final result should list only distinct roles and privileges. In essence, we have to remove duplicates from a comma-separated string.

Let's look at the actual data and the expected result.

Solution
One of two methods can be used to solve this problem in SQL Server, both built around the STRING_AGG() function.
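For readers new to it, here is a minimal sketch of what STRING_AGG() does on its own, and why de-duplication has to happen separately:

```sql
SELECT STRING_AGG(name, ', ') WITHIN GROUP (ORDER BY name) AS csv
FROM (VALUES ('Read'), ('Write'), ('Read')) AS v(name);
-- Returns 'Read, Read, Write': STRING_AGG() concatenates every row it
-- receives and does not de-duplicate, which is why both methods below
-- first reduce the rows (DISTINCT) or filter the result (distinct-values()).
```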

Method 1
We may efficiently eliminate any duplicates by utilizing STRING_AGG() inside a subquery.

SELECT RL.UserName, STRING_AGG(RL.RoleName,', ') AS RoleName, PL.PrivilegeName
FROM (
    SELECT DISTINCT U.UserId, U.Name AS UserName, R.RoleName
    FROM #User U
    INNER JOIN #UserRolePrivilegeMap URPM
    ON U.UserId = URPM.UserId
    INNER JOIN #Role R
    ON URPM.RoleId = R.RoleId) RL
    INNER JOIN (
        SELECT P.UserId, STRING_AGG(P.PrivName,', ') AS PrivilegeName
        FROM (SELECT DISTINCT U.UserId, P.PrivName
              FROM #User U
              INNER JOIN #UserRolePrivilegeMap URPM
                ON U.UserId = URPM.UserId
              INNER JOIN #Privilege P
                ON URPM.PrvId = P.PrvId) P
              GROUP BY P.UserId) PL
        ON RL.UserId = PL.UserId
GROUP BY RL.UserName,PL.PrivilegeName

Expected Result

Method 2
Rows can be converted to comma-separated values (CSV) by grouped concatenation with STRING_AGG(). We then convert the CSV string to XML and use the XQuery function distinct-values() to retrieve only the unique values from the XML instance.

/*Using XQuery-function distinct-values() get only distinct values*/
SELECT UserName
     ,STUFF((RoleName.query('for $x in distinct-values(/x/text())return <x>{concat(",", $x)}</x>').value('.','varchar(250)')),1,1,'') AS RoleName
     ,STUFF((PrivilegeName.query('for $x in distinct-values(/x/text())return <x>{concat(",", $x)}</x>').value('.','varchar(250)')),1,1,'') AS PrivilegeName
FROM(
SELECT U.Name As UserName
    ,CAST('<x>' + REPLACE(STRING_AGG(R.RoleName,','),',','</x><x>') + '</x>' AS XML) AS   RoleName
    ,CAST('<x>' + REPLACE(STRING_AGG(P.PrivName,','),',','</x><x>') + '</x>' AS XML) AS   PrivilegeName
FROM #User U
INNER JOIN #UserRolePrivilegeMap URPM
ON U.UserId = URPM.UserId
INNER JOIN #Role R
ON URPM.RoleId = R.RoleId
INNER JOIN #Privilege P
ON URPM.PrvId = P.PrvId
GROUP BY U.Name)A

Step 1: Transform the actual data into CSV format using the STRING_AGG() function.

Step 2. Next, convert the CSV into XML format.

Step 3. Now, utilize the XQuery function distinct-values() to extract unique values from the XML instance.

Tables and Insert Scripts
/*Create USER Table and Insert Data*/
DROP TABLE IF EXISTS #User
CREATE TABLE #User (UserId INT, Name VARCHAR(50))
INSERT INTO #User (UserId, Name)
VALUES (1, 'John'),
     (2, 'Peter'),
     (3, 'David')

/*Create ROLE Table and Insert Data*/
DROP TABLE IF EXISTS #Role
CREATE TABLE #Role (RoleId INT, RoleName VARCHAR(50))
INSERT INTO #Role (RoleId, RoleName)
VALUES (1, 'IT Admin'), (2, 'Developer'), (3, 'Sr.Developer'), (4, 'Lead'), (5, 'Sr.Lead')

/*Create PRIVILEGE Table and Insert Data*/
DROP TABLE IF EXISTS #Privilege
CREATE TABLE #Privilege (PrvId INT, PrivName VARCHAR(50))
INSERT INTO #Privilege (PrvId, PrivName)
VALUES (1, 'Default'), (2, 'Admin'), (3, 'Creator'), (4, 'Read'), (5, 'Write'), (6, 'Owner')

/*Create USERROLEPRIVILEGEMAP Table and Insert Data*/
DROP TABLE IF EXISTS #UserRolePrivilegeMap
CREATE TABLE #UserRolePrivilegeMap(UserId INT, RoleId INT, PrvId INT)
INSERT INTO #UserRolePrivilegeMap (UserId, RoleId, PrvId)
VALUES (1,1,1), (1,1,2), (1,2,1), (1,2,2), (1,2,4), (1,2,5), (1,4,2),
     (1,4,4), (2,1,1), (2,1,5), (2,1,3), (2,5,1), (2,5,2), (2,5,6),
     (2,5,5), (2,5,3), (3,1,1), (3,1,6), (3,1,5), (3,1,4), (3,3,1),
     (3,3,2), (3,3,4), (3,3,5), (3,3,6)

/*Join all tables and get the actual data*/
SELECT U.Name AS UserName
    ,R.RoleName
    ,P.PrivName AS PrivilegeName
FROM #User U
INNER JOIN #UserRolePrivilegeMap URPM
ON U.UserId = URPM.UserId
INNER JOIN #Role R
ON URPM.RoleId = R.RoleId
INNER JOIN #Privilege P
ON URPM.PrvId = P.PrvId

Conclusion
SQL Server's STRING_AGG() aggregate function efficiently converts row data into comma-separated values; combined with DISTINCT subqueries or XQuery's distinct-values(), duplicates are removed for a streamlined and consistent result.

HostForLIFEASP.NET SQL Server 2022 Hosting

 



European SQL Server 2022 Hosting :: Setting Up a Calendar in SQL Server

clock May 3, 2024 08:20 by author Peter

Using SQL Server's built-in capabilities, I've made a basic calendar in the example below.

SQL Server Temporary Tables
Temporary tables can hold intermediate result sets that are queried repeatedly.

Creating temporary tables

SQL Server offers two ways to create temporary tables: the SELECT INTO and CREATE TABLE statements.

CREATE OR ALTER PROCEDURE calendar(@start_date DATE, @end_date DATE)
AS
BEGIN
    DECLARE @DateDiff INT;
    DECLARE @count INT;
    SET @count = 0; -- day offset from the start date
    SET @DateDiff = DATEDIFF(DAY, @start_date, @end_date); -- number of days between the two dates
    DROP TABLE IF EXISTS #temp_cal;
    CREATE TABLE #temp_cal(
        id INT IDENTITY(1,1) PRIMARY KEY,
        d_date DATE,
        end_date DATE
    );
    -- Insert one row per day from @start_date through @end_date
    WHILE @count <= @DateDiff BEGIN
        INSERT INTO #temp_cal(d_date, end_date) VALUES(DATEADD(DAY, @count, @start_date), @end_date);
        SET @count = @count + 1;
    END;
    SELECT DAY(d_date) AS 'DAY',
           MONTH(d_date) AS 'MONTH',
           DATENAME(weekday, d_date) AS 'DAY_NAME',
           DATENAME(month, d_date) AS 'MONTH_NAME',
           DATENAME(year, d_date) AS 'YEAR',
           d_date AS 'DATE',
           DATENAME(week, d_date) AS 'WEEK',
           DATEPART(DY, d_date) AS 'DAY_OF_YEAR',
           DATEPART(ISO_WEEK, d_date) AS 'ISO_WEEK',
           DATEPART(wk, d_date) AS 'WEEK_OF_YEAR',
           DATEDIFF(DAY, d_date, end_date) AS 'DAYS_TO_END_DATE',
           DATEDIFF(WEEK, d_date, end_date) AS 'WEEKS_TO_END_DATE',
           DATEDIFF(MONTH, d_date, end_date) AS 'MONTHS_TO_END_DATE',
           FORMAT(d_date, 'D', 'en-US') AS 'US_LONG_DATE'
    FROM #temp_cal;
END;
EXEC calendar @start_date = '2023-01-01', @end_date = '2023-12-31';
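
As a side note, the same date range can be generated without a WHILE loop. A sketch of a set-based alternative using a recursive CTE, with its own declared inputs:

```sql
-- Set-based alternative: generate the date range with a recursive CTE
-- instead of a WHILE loop.
DECLARE @start_date DATE = '2023-01-01', @end_date DATE = '2023-12-31';

WITH dates AS (
    SELECT @start_date AS d_date
    UNION ALL
    SELECT DATEADD(DAY, 1, d_date)
    FROM dates
    WHERE d_date < @end_date
)
SELECT d_date,
       DATENAME(weekday, d_date) AS day_name,
       DATEPART(ISO_WEEK, d_date) AS iso_week
FROM dates
OPTION (MAXRECURSION 366); -- allow up to a full year of recursion steps
```

Set-based generation avoids per-row INSERT overhead and scales better for long ranges.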


 



European SQL Server 2022 Hosting :: SQL Temporary Tables: Types, Syntax, and Application

clock April 26, 2024 07:30 by author Peter

Temporary tables, often known as temp tables, are widely used by database administrators and developers. They function similarly to ordinary tables and are kept in the tempDB database, enabling you to select, insert, and delete data as needed. If they are created inside a stored procedure, they are dropped automatically once the procedure finishes.

What is a temporary table in SQL Server?

In SQL, a temporary table is a database table that exists only temporarily on the database server. It retains a subset of the data from a regular table for a limited time.

Temporary tables are useful when you need to work repeatedly with a small subset of the many records in a table. Rather than filtering the data several times to obtain the subset, you can filter it once and save the result in a temporary table.

Temporary tables behave almost the same as permanent tables. They are created in tempDB and are deleted as soon as the last connection to the query window that created them is closed. They can be used to process and store intermediate results whenever data needs to be held temporarily.

How to Create a Temporary SQL Table?
CREATE TABLE #tmpEmp
(
  Id INT,
  Name NVARCHAR(50),
  Rank INT
)


Types of Temporary Tables in SQL
SQL Server supports two types of temporary tables.

  • Local Temporary Tables
  • Global Temporary Tables

Local Temporary Tables
Local temporary tables are stored in tempDB. They are destroyed automatically at the end of the procedure or session, and they are only visible to the session that created them.

For instance, after a local temporary table called #tmpEmp is created, only the session that created it can work with the table, and it is dropped when the query window's last connection is closed. Local temporary tables are recognized by the # prefix (e.g. #tablename), and a table with the same name can be created independently in multiple query windows.

A local temporary table is created with the CREATE TABLE command, where the table name is prefixed with a single number sign (#tablename).

Syntax
The following is the syntax in SQL Server (Transact-SQL) for making a LOCAL TEMPORARY TABLE.
CREATE TABLE #tablename
(
  column1 datatype [ NULL | NOT NULL ],
    column2 datatype [ NULL | NOT NULL ],
    column3 datatype [ NULL | NOT NULL ],
    ...
    columnn datatype [ NULL | NOT NULL ]
);

Let’s insert some data into the temporary table.

INSERT INTO #tmpEmp ([Name], [Rank])
VALUES ('Peter Scott', 196)

Let’s check the results, whether it’s stored or not.
SELECT Id, [Name], [Rank] FROM #tmpEmp

Another way to create a temporary table in SQL is with the SELECT INTO statement. For the illustration, go through the following things.
SELECT Id, [Name], [Rank]
INTO #tmpEmp1
FROM #tmpEmp


Global Temporary Tables in SQL Server
Global temporary tables are also kept in tempDB. They are transient tables that are accessible to all users and sessions at the same time, and they are removed automatically when the final session referencing them ends. They are not persistent objects in a user database.

A global temporary table is created with the CREATE TABLE command, and the table name is preceded by a double number sign (##table name).

Syntax
The following is the syntax in SQL Server (Transact-SQL) for making a global temporary table.
CREATE TABLE ##tablename
(
  column1 datatype [ NULL | NOT NULL ],
    column2 datatype [ NULL | NOT NULL ],
    column3 datatype [ NULL | NOT NULL ],
    ...
    columnn datatype [ NULL | NOT NULL ]
);


Create a global temporary table.

CREATE TABLE ##tmpEmployee
(
  Id INT NOT NULL IDENTITY (1, 1),
  [Name] NVARCHAR(50),
  [Rank] INT
)

Let’s insert some data into the global temporary table.

INSERT INTO ##tmpEmployee ([Name], [Rank])
VALUES ('Peter', 196)
, ('Daniel', 1211)
, ('Maria', 1250)
, ('Laura', 1280)

Let’s check the results, whether it’s stored or not.
SELECT Id, [Name], [Rank] FROM ##tmpEmployee


Where is the temporary table stored on the SQL Server?
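
Temporary tables live in the tempDB system database. One way to confirm this is to query tempDB's catalog; a sketch, noting that local temp table names carry an internal suffix:

```sql
-- Temporary tables are created in tempDB; list them from its catalog.
-- Local temp table names carry an internal suffix (e.g. #tmpEmp_____00...),
-- while global temp tables keep their exact name.
SELECT name, create_date
FROM tempdb.sys.tables
WHERE name LIKE '#%';
```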


Delete Temporary and Global Temporary SQL Table
When possible, we should drop temporary tables explicitly rather than waiting for them to be deleted automatically when the connection is closed, so that tempDB resources are released promptly.

Syntax
DROP TABLE TableName

As an illustration,
DROP TABLE #tmpEmp, #tmpEmp1, ##tmpEmployee
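
On SQL Server 2016 and later, the IF EXISTS clause avoids an error when one of the listed tables does not exist, which makes cleanup scripts safe to re-run. A sketch:

```sql
-- Drop temp tables only if they exist (SQL Server 2016+),
-- avoiding an error when the script is re-run after the tables are gone.
DROP TABLE IF EXISTS #tmpEmp, #tmpEmp1, ##tmpEmployee;
```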

Let's check in SQL Server that the temporary tables no longer exist.


 



European SQL Server 2022 Hosting :: Recognizing Database Management System Layers

clock April 22, 2024 08:01 by author Peter

This succinct article will help you understand the fundamentals of three-tier architecture in database management systems (DBMS). Discover the layers: data management, application, and display. Each has a specific function in coordinating smooth data exchanges. Discover the benefits, tenets, and real-world applications of this architectural approach, which will enable you to create and implement dependable and expandable database solutions.

External Level
In the three-tier architecture of a database management system (DBMS), the external level, sometimes referred to as the user interface layer, is the highest tier. By presenting data in an approachable and comprehensible format, it acts as the entry point for end users to engage with the system. This layer includes forms, reports, dashboards, and graphical user interfaces customized for particular user roles and preferences. The external level abstracts the intricacies of the underlying database structure behind a streamlined, intuitive interface, so users can quickly access, retrieve, update, and edit data.

Conceptual Level
In the three-tier design of a database management system (DBMS), the conceptual level is the intermediary layer, situated between the external and physical levels. It abstracts the underlying physical storage information from the end users and applications that interact with the system, representing the logical perspective of the database. At this stage, implementation details are not as important as specifying the general structure, organization, and relationships of the data within the database. Entity-relationship diagrams (ERDs) and other conceptual data models, which depict the entities, characteristics, and relationships in the database schema, are commonly used to represent the conceptual level. Because of its capacity to support data independence, this abstraction improves flexibility and maintainability by enabling modifications to the database structure without impacting the external applications.

Physical Level
At the basis of a database management system's (DBMS) three-tier design, the physical level is in charge of overseeing the real data storage and retrieval on the physical storage devices. This tier communicates directly with the disks, RAM, and storage devices that make up the hardware (Data Store). Data is arranged and stored physically using techniques like indexing, segmentation, and storage optimization that are designed for effective access and retrieval. To maintain data integrity, security, and performance, this tier also includes features like disk management techniques, data compression, and encryption. By converting logical data structures into physical storage, the physical level works directly with low-level storage methods, in contrast to the conceptual and external levels, which abstract away from specifics of physical implementation.

These layers communicate with one another in an organized manner, with each layer carrying out particular duties and passing information or requests to the next layer as needed. This division of responsibilities allows database systems to be designed and managed with modularity, scalability, and maintainability.

In summary

The conceptual layer establishes logical data structures, the physical layer oversees actual data storage, and the external layer offers user-friendly interfaces in a three-tier architecture. By working together, they produce a coherent framework for effective data processing and administration that improves database systems' scalability, maintainability, and security.


 

 


 



European SQL Server 2022 Hosting :: Table-Valued Parameters in SQL Server

clock April 18, 2024 07:32 by author Peter

Table-valued parameters are similar to parameter arrays in that they eliminate the need to build a temporary table or use numerous parameters, enabling multiple rows of data to be sent to a Transact-SQL statement or routine, such as a stored procedure or function. This article explains how to use a table-valued parameter in a Microsoft SQL Server stored procedure.

User-Defined Table Types are tables designed to hold temporary data and are used to declare table-valued parameters. Thus, you can create a table type and pass it as a parameter to a procedure that needs to receive, for instance, a list of items. For demonstration purposes, I created a Products table that will be used in the following examples. This is the structure of the Products table.

CREATE TABLE Products (
    Id INT NOT NULL,
    Name NVARCHAR(100) NULL,
    Description NVARCHAR(200) NULL,
    CreatedDate DATETIME2 NOT NULL CONSTRAINT [DF_Products_CreatedDate] DEFAULT GETUTCDATE(),
    CreatedBy NVARCHAR(150) NOT NULL,
    CONSTRAINT PK_Product PRIMARY KEY(Id)
 );

Adding Just One Product

Imagine a situation in which a user opens your app and needs to register just one item. For that, a procedure must exist for adding this product to the database. First, create a procedure that inserts a single product into the Products table. It should receive the Id, Name, Description, and the user who created the product as parameters.

CREATE PROCEDURE InsertProduct (
        @Id INT,
        @Name NVARCHAR(100),
        @Description NVARCHAR(200),
        @User NVARCHAR(150)
    )
AS
BEGIN
    INSERT INTO Products (
        Id,
        Name,
        Description,
        CreatedBy
    )
    VALUES (
          @Id,
          @Name,
          @Description,
          @User
    );
END

To test this procedure, we can run a script that wraps the statements in a BEGIN TRANSACTION with a ROLLBACK at the end (useful when testing, to avoid having to delete/change/revert the data after each test), and inside it execute the statements to insert the products.

BEGIN TRANSACTION
SELECT * FROM Products;
EXEC InsertProduct 1, 'Galaxy S22', 'Smartphone Samsung Galaxy S22', 'Henrique';
EXEC InsertProduct 2, 'iPhone 14 Pro', 'Smartphone Apple iPhone 14 Pro', 'Henrique';
EXEC InsertProduct 3, 'iPhone 13 Pro', 'Smartphone Apple iPhone 13 Pro', 'Henrique';
SELECT * FROM Products;
ROLLBACK

The BEGIN TRANSACTION statement allows us to revert the changes at the end of the execution.
The first SELECT query checks the data in the Products table before the inserts.
The three EXEC InsertProduct calls insert the products. Note that to insert three products, we had to execute the procedure three times, once for each product.
The second SELECT query checks the data in the Products table after the inserts.
The ROLLBACK statement reverts the changes that were made.

This is the result.

Adding a Large Number of Products
Now imagine a situation where you receive a list of products to add to the Products table, rather than just one product. In this case, the procedure should take a table type parameter, which functions as a sort of array of products. The table type's columns should match the properties of the Products table; for instance, it will include the Id, Name, and Description columns.

CREATE TYPE ProductType AS TABLE (
  Id INT NOT NULL,
  Name NVARCHAR(100) NULL,
  Description NVARCHAR(200) NULL,
  PRIMARY KEY(Id)
);

Once created, the table type can be seen in SSMS under the database's Programmability > Types > User-Defined Table Types node.

The new procedure we'll be creating, InsertProducts (plural), will take two arguments: the table type and the user entering the records. The InsertProducts procedure is as follows.

CREATE PROCEDURE InsertProducts (
    @Products ProductType READONLY,
    @User NVARCHAR(150)
  )
AS
BEGIN
  INSERT INTO Products (
    Id,
    Name,
    Description,
    CreatedBy
  )
  SELECT
    prd.Id,
    prd.Name,
    prd.Description,
    @User
  FROM @Products prd
END

  • On line 2, there is the parameter @Products of type ProductType, and it must have the READONLY keyword.
  • On line 3, there is the parameter @User of type NVARCHAR, which saves the name of the user who runs the procedure. Note: this second parameter is here only to demonstrate that even when a procedure has a table type as a parameter, it is still possible to use more parameters of different types. If you need the user who executed the SQL script, you can use SYSTEM_USER in the script instead of receiving the user as a parameter.
  • On line 7, the INSERT statement begins.
  • On line 13, there is the SELECT query, which reads the data from the table type received as a parameter (@Products) and inserts it into the Products table.
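
As the note above mentions, the user parameter could be replaced with SYSTEM_USER. A sketch of that variant (the procedure name InsertProductsAuto is hypothetical):

```sql
-- Variant that records the executing login automatically via SYSTEM_USER,
-- removing the @User parameter (sketch; the procedure name is hypothetical).
CREATE PROCEDURE InsertProductsAuto (
    @Products ProductType READONLY
)
AS
BEGIN
    INSERT INTO Products (Id, Name, Description, CreatedBy)
    SELECT prd.Id, prd.Name, prd.Description, SYSTEM_USER
    FROM @Products prd;
END
```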

Let’s test the procedure now. As before, we will wrap the script in a TRANSACTION with a ROLLBACK at the end; for the test, we will add some data into the table type variable and execute the procedure, passing the variable as a parameter.

BEGIN TRANSACTION
  SELECT * FROM Products;
  DECLARE @Products ProductType;
  INSERT INTO @Products
  SELECT 1, 'Galaxy S22+', 'Smartphone Samsung Galaxy S22+'
  UNION ALL
  SELECT 2, 'iPhone 14 Pro', 'Smartphone Apple iPhone 14 Pro'
  UNION ALL
  SELECT 3, 'iPhone 13 Pro', 'Smartphone Apple iPhone 13 Pro';
  EXEC InsertProducts @Products, 'Henrique';
  SELECT * FROM Products;
ROLLBACK

  • A new transaction is started.
  • The first SELECT checks the data in the Products table before running the procedure.
  • A variable of type ProductType is declared.
  • Three records are inserted into the @Products variable.
  • The InsertProducts procedure is executed, receiving as parameters the table type variable (@Products) and a user ('Henrique').
  • A second SELECT on the Products table is executed; the three records are expected to have been inserted.
  • A ROLLBACK is executed to revert the changes.

This is the result.

Since this is a new table, it is expected that the initial SELECT returned no results. The second SELECT, run after the insert procedure, shows the three products added to the Products table. Let's run another test for the scenario in which the Products table already contains records. To set that up, let's add some data to the table.

INSERT INTO Products (Id, Name, Description, CreatedBy)
  VALUES (1, 'Galaxy S21+', 'Smartphone Samsung Galaxy S21+', 'Henrique'),
          (2, 'Galaxy S22', 'Smartphone Samsung Galaxy S22', 'Henrique'),
    (3, 'Galaxy S22+', 'Smartphone Samsung Galaxy S22+', 'Henrique');


Now, let’s do another test, adding new records using the InsertProducts procedure.

BEGIN TRANSACTION
  SELECT * FROM Products;
  DECLARE @Products ProductType;
  INSERT INTO @Products
  SELECT 4, 'iPhone 13 Pro', 'Smartphone Apple iPhone 13 Pro'
  UNION ALL
  SELECT 5, 'iPhone 14 Pro', 'Smartphone Apple iPhone 14 Pro';
  EXEC InsertProducts @Products, 'Henrique';
  SELECT * FROM Products;
ROLLBACK

  • The first SELECT returns the records that were previously added to the Products table.
  • A variable of type ProductType is declared.
  • Two products are added to the ProductType variable, which will be used as a parameter to the procedure.
  • The InsertProducts procedure is executed.
  • A second SELECT is executed to return the products.

This is the result.


The new records with Ids 4 and 5 were added to the product table as anticipated.

In summary

By declaring User-Defined Table Types and employing table-valued parameters, it is possible to build a stored procedure or function that takes a list of data as a parameter. This makes it possible to deliver a large amount of data in a single call rather than executing the procedure numerous times (once per row).


 


 



About HostForLIFE

HostForLIFE is European Windows Hosting Provider which focuses on Windows Platform only. We deliver on-demand hosting solutions including Shared hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes.

We have offered the latest Windows 2019 Hosting, ASP.NET 5 Hosting, ASP.NET MVC 6 Hosting and SQL 2019 Hosting.

