European Windows 2019 Hosting BLOG

BLOG about Windows 2019 Hosting and SQL 2019 Hosting - Dedicated to European Windows Hosting Customers

European SQL Server 2022 Hosting :: Using Tablock to Unlock Faster INSERT Operations in SQL Server

clock December 18, 2024 07:17 by author Peter

Performance is frequently the main concern in any SQL Server system when working with big datasets. The INSERT operation is one frequent operation that can turn into a bottleneck. The time required to insert data grows with its volume, which can have a major effect on overall system performance and user experience. The TABLOCK hint is one of the many methods and improvements that SQL Server offers to help speed up data insertions. When working with huge datasets or when parallelism is crucial, this straightforward yet effective technique can significantly increase the pace of your INSERT operations.

Let’s explore how using the TABLOCK hint works, its benefits, and how you can apply it to your own operations, with examples from the HFL database.

What is the TABLOCK Hint?
The TABLOCK hint is a table-level locking hint that tells SQL Server to take a table-level lock (a bulk update or exclusive lock, rather than finer-grained row or page locks) on the target table when performing an INSERT, UPDATE, or DELETE operation. This hint keeps the table locked for the duration of the operation, which can help speed up data loading by enabling minimal logging and reducing lock-management overhead and contention.

A key benefit of the TABLOCK hint is that it can reduce the amount of log space used during the operation, because it allows the minimal logging mechanism to kick in (provided the database uses the simple or bulk-logged recovery model and the other minimal-logging requirements are met). This means that SQL Server does not have to fully log each individual row insertion, but rather just the allocations for the bulk operation. As a result, this significantly reduces the overhead and speeds up data loading.

Additionally, because the entire table is locked up front, SQL Server can parallelize the INSERT operation, leading to faster execution times. This is particularly useful for large-scale data-loading tasks.
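As a rough way to see the effect of minimal logging, you can wrap the insert in an explicit transaction and check how many log bytes it generated before committing. This is only a sketch: it assumes the Sales.SalesOrderDetailTemp table created later in this post, and it requires VIEW SERVER STATE permission for the DMVs.

BEGIN TRAN;

INSERT INTO Sales.SalesOrderDetailTemp WITH (TABLOCK)
SELECT *
FROM Sales.SalesOrderDetail;

-- Log bytes generated by the current session's transaction in this database.
SELECT dt.database_transaction_log_bytes_used
FROM sys.dm_tran_database_transactions AS dt
JOIN sys.dm_tran_session_transactions AS st
    ON st.transaction_id = dt.transaction_id
WHERE st.session_id = @@SPID
  AND dt.database_id = DB_ID();

COMMIT TRAN;

Run it once with the hint and once without (truncating the table in between) to compare the log usage.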

When to Use TABLOCK Hint
The TABLOCK hint is ideal for scenarios where:

  • You are inserting a large number of rows into a table.
  • You can afford to lock the table for the duration of the operation (i.e., no other transactions need access to the table while the insert is in progress).
  • You want to reduce the logging overhead and speed up bulk insertions.
  • You want to use parallel insertions to take advantage of SQL Server's ability to use multiple threads for data loading.


It’s also important to note that the TABLOCK hint works well with temporary tables, so you can take advantage of these performance benefits when working with temp tables, often used in ETL processes or batch operations.
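For instance, a hypothetical ETL staging step might bulk-copy rows into a temporary table with the same hint (the #StagingOrders table below is only an illustration, not part of the later example):

CREATE TABLE #StagingOrders
(
    SalesOrderID INT      NOT NULL,
    OrderDate    DATETIME NOT NULL,
    TotalDue     MONEY    NOT NULL
);

INSERT INTO #StagingOrders WITH (TABLOCK)
SELECT SalesOrderID, OrderDate, TotalDue
FROM Sales.SalesOrderHeader;

DROP TABLE #StagingOrders;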

Benefits of Using TABLOCK

  • Improved Performance: The primary benefit of using the TABLOCK hint is the performance improvement during large INSERT operations. By reducing the amount of logging, SQL Server can insert rows much faster.
  • Parallel Insertion: With TABLOCK, SQL Server can use parallelism to load the data, speeding up the operation on systems with sufficient resources.
  • Reduced Logging Overhead: Since SQL Server logs less information, the system consumes less log space, which can be crucial when working with large datasets.
  • Works with Temp Tables: You can apply TABLOCK to temporary tables as well, giving you the same performance benefits for staging and batch operations in tempdb.


Example
Let’s consider a scenario where we need to insert a large number of rows from the Sales.SalesOrderDetail table into the Sales.SalesOrderDetailTemp table in the HFL database.
Create table script for Sales.SalesOrderDetailTemp:
USE [HFL]
GO

DROP TABLE IF EXISTS [Sales].[SalesOrderDetailTemp]
GO

SET ANSI_NULLS ON
GO

SET QUOTED_IDENTIFIER ON
GO

CREATE TABLE [Sales].[SalesOrderDetailTemp](
    [SalesOrderID] [int] NOT NULL,
    [SalesOrderDetailID] [int] NOT NULL,
    [CarrierTrackingNumber] [nvarchar](25) NULL,
    [OrderQty] [smallint] NOT NULL,
    [ProductID] [int] NOT NULL,
    [SpecialOfferID] [int] NOT NULL,
    [UnitPrice] [money] NOT NULL,
    [UnitPriceDiscount] [money] NOT NULL,
    [LineTotal]  [money] NULL,
    [rowguid] [uniqueidentifier] ROWGUIDCOL  NOT NULL,
    [ModifiedDate] [datetime] NOT NULL,
 CONSTRAINT [PK_SalesOrderDetailTemp_SalesOrderID_SalesOrderDetailTempID] PRIMARY KEY CLUSTERED
(
    [SalesOrderID] ASC,
    [SalesOrderDetailID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, OPTIMIZE_FOR_SEQUENTIAL_KEY = OFF) ON [PRIMARY]
) ON [PRIMARY]
GO


Without the TABLOCK hint, this operation may take a considerable amount of time, especially when the table is large and the database is under load.

Here’s a basic example of how you can speed up the INSERT operation by using the TABLOCK hint.
USE HFL
GO

SET STATISTICS TIME, IO ON

SET NOCOUNT ON

INSERT INTO Sales.SalesOrderDetailTemp
SELECT *
FROM Sales.SalesOrderDetail;

Truncate the table.
USE HFL
GO

TRUNCATE TABLE Sales.SalesOrderDetailTemp


Now, let’s modify the query to use the TABLOCK hint.
USE HFL
GO

SET STATISTICS TIME, IO ON

SET NOCOUNT ON

INSERT INTO Sales.SalesOrderDetailTemp
WITH (TABLOCK)
SELECT *
FROM Sales.SalesOrderDetail;

Comparison
Execution 1 (without TABLOCK) took longer, with higher CPU and elapsed time (204 ms and 284 ms), indicating a slower operation. Execution 2 (with TABLOCK) performed better, completing in 125 ms CPU time and 157 ms elapsed time, making the TABLOCK version more efficient in this case.
Considerations When Using TABLOCK

While the TABLOCK hint can greatly improve performance, it’s important to be aware of some considerations:

  • Table Locking: The TABLOCK hint locks the entire table for the duration of the operation. This means that other transactions cannot access the table until the INSERT operation is complete, so be sure that this behavior aligns with your application’s requirements.
  • Transaction Log Growth: Although TABLOCK reduces the amount of logging, it still logs certain details of the operation. If you’re inserting a massive amount of data, you may need to monitor transaction log growth and ensure that you have enough log space available.
  • Not Suitable for OLTP Workloads: The TABLOCK hint is more suited to batch operations or bulk-loading scenarios. It may not be appropriate for transactional systems that require frequent concurrent access to the table.

Conclusion
If you are working with large datasets and want to speed up your INSERT operations in SQL Server, the TABLOCK hint can be a game-changer. By reducing logging overhead and enabling parallel insertions, it helps improve performance and can significantly reduce the time it takes to load data.

HostForLIFEASP.NET SQL Server 2022 Hosting

 



European SQL Server 2022 Hosting :: Cleaning Data in SQL Server

clock November 25, 2024 06:45 by author Peter

In order to guarantee the accuracy, consistency, and dependability of the data used for analysis, reporting, or machine learning, data cleansing is an essential stage in the data preparation process. Inaccurate judgments, faulty models, and ineffective decision-making can result from low-quality data.

1. Removing Duplicates
Duplicates in datasets can skew results, inflate counts, or cause redundancy. SQL Server provides ways to remove duplicates using DISTINCT or GROUP BY.
Example. Identify and remove duplicate rows in the Person.EmailAddress table.

If you want to remove duplicates permanently, you can use CTE (Common Table Expressions) with ROW_NUMBER().
USE Hostforlife;
GO

SELECT DISTINCT
    EmailAddressID,
    EmailAddress
FROM
    Person.EmailAddress;

WITH CTE AS (
    SELECT
        EmailAddressID,
        EmailAddress,
        ROW_NUMBER() OVER (PARTITION BY EmailAddress ORDER BY EmailAddressID) AS RowNum
    FROM
        Person.EmailAddress
)
DELETE FROM CTE
WHERE RowNum > 1;
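Since GROUP BY is mentioned above as an alternative, here is a small sketch that identifies duplicates by grouping and counting (assuming a duplicate is defined purely by the EmailAddress value):

SELECT
    EmailAddress,
    COUNT(*) AS Occurrences
FROM
    Person.EmailAddress
GROUP BY
    EmailAddress
HAVING
    COUNT(*) > 1;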

2. Handling Missing Values
Missing values can impact analysis and decision-making. Use SQL functions like IS NULL, COALESCE, or CASE to identify and handle missing data. Example. Replace missing PhoneNumber values in the Person.PersonPhone table with a default value, or remove rows with missing values.
USE Hostforlife;
GO

SELECT
    BusinessEntityID,
    PhoneNumber,
    COALESCE(PhoneNumber, 'Unknown') AS CleanedPhoneNumber
FROM
    Person.PersonPhone;

DELETE FROM
    Person.PersonPhone
WHERE
    PhoneNumber IS NULL;
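The CASE expression mentioned above can also be used simply to flag missing values instead of replacing or deleting them; a minimal sketch:

SELECT
    BusinessEntityID,
    PhoneNumber,
    CASE WHEN PhoneNumber IS NULL THEN 'Missing' ELSE 'Present' END AS PhoneStatus
FROM
    Person.PersonPhone;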

3. Correcting Data Inconsistencies
Inconsistencies like variations in case or formatting can lead to errors in joins or grouping. SQL Server string functions like UPPER, LOWER, or REPLACE can help standardize data.

Example. Standardize FirstName values in the Person.Person table to uppercase. Replace incorrect substrings in email addresses.
USE Hostforlife;
GO

UPDATE Person.Person
SET FirstName = UPPER(FirstName);

UPDATE Person.EmailAddress
SET EmailAddress = REPLACE(EmailAddress, '@Hostforlife.eu', '@Hostforlife.eu');

4. Standardizing Data Formats
Standardized data formats ensure consistency and compatibility across systems. Functions like CAST and CONVERT are often used for this purpose. Example. Convert ModifiedDate in the Sales.SalesOrderHeader table to a specific format. Here, the date format 101 converts the date to MM/DD/YYYY.

USE Hostforlife;
GO

SELECT SalesOrderID,
       CONVERT(VARCHAR(10), ModifiedDate, 101) AS FormattedDate
FROM Sales.SalesOrderHeader;

5. Removing Outliers

Outliers can distort statistical analyses and trends. Use statistical functions and filtering to identify and exclude them.

Example. Remove outliers based on TotalDue in the Sales.SalesOrderHeader table.
USE Hostforlife;
GO

SELECT *
FROM (
    SELECT *,
        PERCENTILE_CONT(0.05) WITHIN GROUP (ORDER BY TotalDue) OVER () AS P05,
        PERCENTILE_CONT(0.95) WITHIN GROUP (ORDER BY TotalDue) OVER () AS P95
    FROM Sales.SalesOrderHeader
) AS s
WHERE TotalDue BETWEEN P05 AND P95;

6. Validating Data
Data validation ensures data integrity by applying constraints or rules. SQL Server constraints like NOT NULL, UNIQUE, and CHECK are essential for enforcing data quality.
Example. Enforce data integrity when creating a new table.
USE Hostforlife;
GO

CREATE TABLE Sales.Promotions (
    PromotionID INT PRIMARY KEY,
    PromotionName NVARCHAR(100) NOT NULL,
    DiscountPercentage DECIMAL(5, 2) CHECK (DiscountPercentage BETWEEN 0 AND 100),
    StartDate DATE NOT NULL,
    EndDate DATE NOT NULL,
    CONSTRAINT CK_Promotions_EndDate CHECK (EndDate > StartDate)
);

Alternatively, validate existing data using conditional queries.
SELECT *
FROM Sales.Promotions
WHERE DiscountPercentage < 0
   OR DiscountPercentage > 100;


Conclusion
Data cleansing is an ongoing process and a crucial component of the data lifecycle. By removing duplicates, handling missing values, correcting inconsistencies, standardizing formats, removing outliers, and validating data, you can significantly improve the quality of your data. These techniques, demonstrated here against the Hostforlife sample database, can be applied to real-world datasets to ensure accurate and actionable insights. By incorporating these practices into your data workflows, you can ensure that your analysis, reports, and machine learning models are built on a solid foundation of clean data.

HostForLIFEASP.NET SQL Server 2022 Hosting



European SQL Server 2022 Hosting :: The Best and Worst New T-SQL Improvements for SQL Server

clock October 16, 2024 06:55 by author Peter

This article covers the syntax, use cases, and basic examples of two new T-SQL enhancements: LEAST and GREATEST. We will also look at how they differ from conventional methods and how they affect everyday database operations. These functions can make many querying tasks more efficient for SQL developers, data engineers, and DBAs.

Introduction
Before these functions were introduced in SQL Server 2022, finding the minimum or maximum value from a set of columns or expressions required more complex logic or verbose CASE statements. The LEAST and GREATEST functions simplify these operations, offering more intuitive and concise ways to achieve the desired outcomes.

Syntax Overview
    LEAST: Returns the smallest value from a list of expressions.

    LEAST (
        expression1,
        expression2,
        ...,
        expressionN
    )

GREATEST: Returns the largest value from a list of expressions.

GREATEST (
    expression1,
    expression2,
    ...,
    expressionN
)

Both functions can compare multiple arguments (expressions), and their arguments can be any data type that supports comparison (numeric, date, or even string types). The result is returned in the data type with the highest precedence among the input expressions.
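As a quick hedged illustration of how the comparison behaves across data types and with NULLs (these are standalone expressions, not tied to any table): in SQL Server 2022, NULL arguments are ignored, and the functions return NULL only when every argument is NULL.

SELECT
    LEAST('Pear', 'Apple', 'Mango') AS LeastString,     -- returns 'Apple' (alphabetical comparison)
    GREATEST(10, NULL, 5)           AS GreatestNumber;  -- returns 10; the NULL argument is ignored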

LEAST and GREATEST vs. Traditional Methods

Prior to SQL Server 2022, SQL developers typically had to use CASE statements or a combination of MIN and MAX functions along with complex logic to compare multiple columns or values. While these methods worked, they were cumbersome and often led to less readable queries.

Examples of Traditional Methods
To compare multiple columns before SQL Server 2022.
SELECT
    CASE
        WHEN Column1 <= Column2 AND Column1 <= Column3 THEN Column1
        WHEN Column2 <= Column1 AND Column2 <= Column3 THEN Column2
        ELSE Column3
    END AS SmallestValue,

    CASE
        WHEN Column1 >= Column2 AND Column1 >= Column3 THEN Column1
        WHEN Column2 >= Column1 AND Column2 >= Column3 THEN Column2
        ELSE Column3
    END AS LargestValue
FROM MyTable;


This logic can quickly become unwieldy if you have many columns to compare. Now, with the new functions in SQL Server 2022, this task becomes much simpler.

Using LEAST and GREATEST

The same example uses LEAST and GREATEST.
SELECT
    LEAST(Column1, Column2, Column3) AS SmallestValue,
    GREATEST(Column1, Column2, Column3) AS LargestValue
FROM
    MyTable;


Examples
Example 1. Using LEAST to Find the Smallest Amount per Order

Let's assume you want to compare the TaxAmt, Freight, and SubTotal values for several sales orders and determine the smallest of the three amounts for each order. The Sales.SalesOrderHeader table includes these details for each sales order.

We'll use the LEAST function to identify the smallest amount for each of these orders.
USE [Hostforlife];
GO

SELECT
    SalesOrderID,
    LEAST(TaxAmt, Freight, SubTotal) AS SmallestAmount
FROM
    Sales.SalesOrderHeader WITH (NOLOCK)
WHERE
    SalesOrderID IN (43659, 43660, 43661);

Output

Example 2. Use GREATEST to Find the Maximum Bonus, Sick Leave, and Vacation Hours
Let's now use the HumanResources.Employee table. We want to compare each employee's VacationHours, SickLeaveHours, and a fixed bonus value of 10 (assumed for demonstration) and find out which of these values is the largest for each employee.
USE [Hostforlife];
GO

SELECT
    BusinessEntityID,
    GREATEST(VacationHours, SickLeaveHours, 10) AS MaxBenefit
FROM
    HumanResources.Employee
WHERE
    BusinessEntityID BETWEEN 1 AND 10;

Output

Performance Considerations
While the LEAST and GREATEST functions simplify queries, it’s important to consider their performance impact in large datasets or complex queries. Typically, these functions are efficient, especially compared to more verbose alternatives such as CASE statements. However, in cases where you're comparing large datasets or many columns, performance tuning may still be necessary.

Use Cases for DBAs, Data Engineers, and SQL Developers
For SQL developers, LEAST and GREATEST streamline common query patterns, making code easier to read and maintain. This is especially beneficial when writing queries that compare multiple values across different columns.

For data engineers, these functions simplify data pipeline transformations, where selecting the minimum or maximum value from a set of columns is common. Whether dealing with dates, numeric data, or even strings, LEAST and GREATEST can reduce the complexity of transformation logic.
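As a hedged sketch of the date scenario (reusing the Sales.SalesOrderHeader table from the earlier example, with its OrderDate, DueDate, and ShipDate columns), GREATEST can pick the most recent of several date columns in a single expression:

SELECT
    SalesOrderID,
    GREATEST(OrderDate, DueDate, ShipDate) AS LatestDate
FROM
    Sales.SalesOrderHeader
WHERE
    SalesOrderID IN (43659, 43660, 43661);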

DBAs will also appreciate the reduced complexity when working with large, production-level queries that must compare multiple values or optimize reporting views. These functions can reduce the need for additional table scans or subqueries in complex reports.

Conclusion

Whether you're comparing sales figures, tracking project dates, or handling any scenario that involves finding the smallest or largest value among multiple expressions, LEAST and GREATEST make the job easier. By understanding their behavior, particularly with respect to NULL values, you can ensure that your queries are both efficient and accurate.

HostForLIFEASP.NET SQL Server 2022 Hosting

 



European SQL Server 2022 Hosting :: Concurrency in SQL Server

clock September 23, 2024 08:24 by author Peter

Concurrency
The capacity of various components or units of a program, algorithm, or problem to be executed partially or out of sequence without changing the result is known as concurrency in computer science.

Transaction
A transaction is a collection of operations that performs a single logical function in a database application. Multiple transactions executing at the same time can cause concurrency issues.

Transaction Properties

A database management system (DBMS) that supports transactions is expected to follow the transactional properties known as ACID, a set of properties of database transactions intended to guarantee data validity.

  • A: Atomicity
    • Transactions are often composed of multiple statements. Atomicity guarantees that each transaction is treated as a single "unit", which either succeeds completely or fails completely.
  • C: Consistency
    • Consistency ensures that a transaction can only bring the database from one consistent state to another, preserving database invariants: any data written to the database must be valid according to all defined rules, including constraints, cascades, triggers, and any combination thereof.
  • I: Isolation
    • Transactions are often executed concurrently (e.g., multiple transactions reading and writing to a table at the same time). Isolation ensures that concurrent execution of transactions leaves the database in the same state that would have been obtained if the transactions were executed sequentially.
  • D: Durability
    • Durability guarantees that once a transaction has been committed, it will remain committed even in the case of a system failure (e.g., power outage or crash).

Concurrency Problem Types
The concurrency problem mostly arises when two users try to write the same data, or when one is writing while the other is reading. Apart from this, there are some common types of concurrency problems:

  • Dirty Reads
    • This problem occurs when one process reads data that another process has changed but not yet committed. For instance, if one process has changed data but not committed it, another process can still read that data, which leaves the reader with an inconsistent state.
  • Lost Updates
    • This problem occurs when two processes try to manipulate the same data simultaneously. It can lead to data loss, or the second process might overwrite the first process's change.
  • Non-repeatable Reads
    • This problem occurs when one process is reading the data while another process is writing it. The first process may read two different values for the same data, because the second process changes it between the two reads.
  • Phantom Reads
    • If the same query executed twice returns different sets of rows, that is a phantom read. For instance, if user A runs a query to read some data while user B inserts new rows at the same time, user A sees only the old rows on the first attempt, but re-running the same query returns a different set of rows.

Solve Concurrency Problems
SQL Server provides 5 different levels of transaction isolation to overcome these Concurrency problems. These 5 isolation levels work on two major concurrency models:

  • Pessimistic model -
    • Readers can block writers, and
    • Writers can block readers.
  • Optimistic model -
    • Readers cannot block writers, and
    • Writers cannot block readers, but
    • Writer can block another writer.

Note that readers are users performing SELECT operations. Writers are users performing INSERT, UPDATE, DELETE, and ALTER operations.

5 Isolation Level

In a SQL server, a query can have five different isolation levels:

  • Read Uncommitted --- pessimistic
    • One transaction is allowed to read data that has been modified but not yet committed by another transaction.
      • Allows the dirty read problem.
  • Read Committed --- pessimistic ---  This option is the SQL Server default.
    • Only allowed to read data that is committed,
      • Eliminates the dirty read problem.
        • When reading data, if there are concurrent transactions deleting or writing the same data, the read is blocked until that other work is complete.
  • Repeatable Read --- pessimistic
    • All transactions except inserts have to wait until another transaction's update or read query is complete.
      • Eliminates the Non-Repeatable Read problem, but
      • Phantom data exists
  • Serializable --- pessimistic
    • Prevent the Phantom Read problem.
      • The highest level of isolation in the pessimistic model.
      • Can ask any transaction to wait until the current transaction completes.
  • Snapshot  --- optimistic
    • Snapshot follows the optimistic model of concurrency. This isolation level takes a snapshot of the current data, and each transaction works with its own copy of that data. If a transaction tries to perform an update or insert on data that another transaction has changed in the meantime, the operation fails and has to be re-run against the current data.

These levels are ordered in terms of the separation of work by two different processes, from minimal separation to maximal.
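As a minimal sketch of how an isolation level is actually chosen (the TestDB database and dbo.Orders table here are hypothetical), the level is set per session, and snapshot isolation must first be enabled at the database level:

-- One-time setup: allow snapshot isolation in the database.
ALTER DATABASE TestDB SET ALLOW_SNAPSHOT_ISOLATION ON;

-- Per session: pick the isolation level, then run the transaction.
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;

BEGIN TRAN;
    SELECT COUNT(*) FROM dbo.Orders;   -- reads from the version store (snapshot)
COMMIT TRAN;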

HostForLIFEASP.NET SQL Server 2022 Hosting




European SQL Server 2022 Hosting :: Understanding SQL Server Temporal Tables

clock June 5, 2024 07:31 by author Peter

Temporal tables, a robust feature added in SQL Server 2016, offer an integrated way to store and retrieve historical data. They make it possible to monitor all alterations made to the data in a table, which can be very helpful for data analysis, auditing, and compliance. This article covers in detail what temporal tables are, how they work, and their practical applications.

How do Temporal Tables Work?
System-versioned tables, sometimes referred to as temporal tables, automatically keep track of all data modifications across time. They are made up of two parts:

  • Current Table: Stores the current data.
  • History Table: Automatically stores the historical versions of data.

When a row in the current table is updated or deleted, SQL Server moves the previous version of the row to the history table. This allows you to query historical data at any point in time.

Key Features of Temporal Tables

  • Automated Data Management: Automatically manages the movement of historical data to the history table.
  • Point-in-Time Analysis: Allows querying data as it appeared at any specific point in time.
  • Auditing and Compliance: Provides an audit trail of changes for regulatory compliance.
  • Data Recovery: Enables recovery of data to a previous state without complex restore operations.

Creating Temporal Tables
Creating a temporal table involves specifying system versioning during table creation. Here’s a step-by-step guide.
Define the Current Table: Include period columns for system start and end times.

CREATE TABLE Employees
(
    EmployeeID INT PRIMARY KEY,
    Name NVARCHAR(100),
    Position NVARCHAR(100),
    Salary DECIMAL(18, 2),
    SysStartTime DATETIME2 GENERATED ALWAYS AS ROW START,
    SysEndTime DATETIME2 GENERATED ALWAYS AS ROW END,
    PERIOD FOR SYSTEM_TIME (SysStartTime, SysEndTime)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.EmployeesHistory));

In this example,
SysStartTime and SysEndTime are the system period columns.
PERIOD FOR SYSTEM_TIME defines the period of system time.

Automatically Manage History Table
SQL Server creates and manages the history table.
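To see the versioning in action, a minimal sketch (assuming the Employees table above already contains a row with EmployeeID = 1) updates a row and then inspects the history table:

UPDATE Employees
SET Salary = Salary * 1.10
WHERE EmployeeID = 1;

-- The pre-update version of the row now lives in the history table.
SELECT * FROM dbo.EmployeesHistory WHERE EmployeeID = 1;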

Querying Temporal Tables

Temporal tables allow you to query historical data using the FOR SYSTEM_TIME clause.

Retrieve Current Data
SELECT * FROM Employees;

Retrieve Data at a Specific Point in Time
SELECT * FROM Employees FOR SYSTEM_TIME AS OF '2023-01-01T12:00:00';

Retrieve Data Over a Time Range
SELECT * FROM Employees FOR SYSTEM_TIME BETWEEN '2023-01-01T00:00:00' AND '2023-01-01T23:59:59';

Retrieve All Historical Data
SELECT * FROM Employees FOR SYSTEM_TIME ALL;

Managing Temporal Tables
Turn Off System Versioning: You can turn off system versioning to make schema changes or manage data manually.
ALTER TABLE Employees SET (SYSTEM_VERSIONING = OFF);

Re-enable System Versioning
ALTER TABLE Employees SET (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.EmployeesHistory));

Cleanup Old Data: To manage the size of the history table, you can periodically archive or clean up old data. Note that rows cannot be deleted from the history table while SYSTEM_VERSIONING is ON, so turn versioning off first (as shown above), run the cleanup, and then re-enable it:
DELETE FROM EmployeesHistory WHERE SysEndTime < '2022-01-01T00:00:00';
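Alternatively, newer versions (SQL Server 2017 and later, and Azure SQL Database) support an automatic retention policy on the history table, so old versions are cleaned up for you; a hedged sketch:

ALTER TABLE Employees
SET (SYSTEM_VERSIONING = ON (
        HISTORY_TABLE = dbo.EmployeesHistory,
        HISTORY_RETENTION_PERIOD = 12 MONTHS));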

Best Practices

  • Indexing: Ensure proper indexing on period columns to optimize query performance.
  • Data Retention Policies: Implement data retention policies to manage the growth of the history table.
  • Security: Secure both current and history tables to prevent unauthorized access to sensitive historical data.

Conclusion
Temporal tables in SQL Server offer a robust solution for managing historical data, providing significant benefits for auditing, compliance, and point-in-time analysis. By automatically capturing and storing historical versions of data, they simplify the process of tracking changes over time. With the ability to query data as it existed at any point in time, temporal tables enhance the capabilities of SQL Server for modern data management needs. Implementing temporal tables involves a straightforward setup, and with best practices in place, they can significantly improve your data management strategy.



European SQL Server 2022 Hosting :: SQL Temporary Tables: Types, Syntax, and Application

clock April 26, 2024 07:30 by author Peter

Temporary tables, often known as temp tables, are widely used by database administrators and developers. They function similarly to ordinary tables and are stored in the tempdb database, enabling you to select, insert, update, and delete data as needed. If they are created inside a stored procedure, they are removed once the procedure finishes.

What does a SQL Server temporary table mean?

In SQL, a temporary table is a database table that is only temporarily present on the database server. For a set amount of time, a temporary table retains some of the data from a regular table.

Temporary tables are useful when you need to regularly work with a small subset of the many records in a table. Sometimes it is not necessary to filter the data several times in order to obtain the subset; instead, you can filter the data only once and save it in a temporary table.

Temporary tables and permanent tables are almost the same. They are created in TempDB and deleted right away after the last connection to the query window that created the table is closed. Temporary Tables can be used to process and store interim findings. Temporary tables are used when it is necessary to store data temporarily.

How to Create a Temporary SQL Table?
CREATE TABLE #tmpEmp
(
  Id INT,
  Name NVARCHAR(50),
  Rank INT
)


Types of Temporary Tables in SQL
There are a couple of temporary tables in SQL.

  • Local Temporary Tables
  • Global Temporary Tables

Local Temporary Tables
Local temporary tables are stored in tempdb. They are destroyed automatically at the end of the procedure or session that created them, and they are only available to the session that created them.

For instance, once a local temporary table called #tmpEmp is created, only the session that created it can work with the table, and it exists until the last connection of that query window is closed. Local temporary tables can be recognized by the # prefix, such as #table_name, and a temporary table with the same name can be created in several windows at the same time.

A local temporary table can be created with the CREATE TABLE command, where the table name is prefixed with a single number sign (#table name).

Syntax
The following is the syntax in SQL Server (Transact-SQL) for making a LOCAL TEMPORARY TABLE.
CREATE TABLE #tablename
(
  column1 datatype [ NULL | NOT NULL ],
    column2 datatype [ NULL | NOT NULL ],
    column3 datatype [ NULL | NOT NULL ],
    ...
    columnn datatype [ NULL | NOT NULL ]
);

Let’s insert some data entry in a temporary table.

INSERT INTO #tmpEmp ([Name], [Rank])
VALUES ('Peter Scott', 196)

Let’s check the results, whether it’s stored or not.
SELECT Id, [Name], [Rank] FROM #tmpEmp

Where is the temporary table stored in SQL Server?

Another way to create a temporary table in SQL is with the SELECT INTO statement. For the illustration, go through the following things.
SELECT Id, [Name], [Rank]
INTO #tmpEmp1
FROM #tmpEmp


Global Temporary Table in SQL Server
Additionally, they are kept in tempDB. These tables belong to the category of transient tables that are concurrently accessible to all users and sessions. When the final session using the temporary table concludes, they are automatically removed. These tables do not exist in the system catalogs and are not persistent.

A global temporary table is created with the CREATE TABLE command, and the table name is preceded by a double number sign (##table name).

Syntax
The following is the syntax in SQL Server (Transact-SQL) for making a global temporary table.
CREATE TABLE ##tablename
(
  column1 datatype [ NULL | NOT NULL ],
    column2 datatype [ NULL | NOT NULL ],
    column3 datatype [ NULL | NOT NULL ],
    ...
    columnn datatype [ NULL | NOT NULL ]
);


Create a global temporary table.

CREATE TABLE ##tmpEmployee
(
  Id INT NOT NULL IDENTITY (1, 1),
  [Name] NVARCHAR(50),
  [Rank] INT
)

Let’s insert some data entry in a global temporary table.

INSERT INTO ##tmpEmployee ([Name], [Rank])
VALUES ('Peter', 196)
, ('Daniel', 1211)
, ('Maria', 1250)
, ('Laura', 1280)

Let’s check the results, whether it’s stored or not.
SELECT Id, [Name], [Rank] FROM ##tmpEmployee


Delete Temporary and Global Temporary SQL Table
When possible, we should drop temporary tables explicitly rather than waiting for them to be deleted automatically when the connection is closed, so that tempdb resources are released promptly.

Syntax
DROP TABLE TableName

As an illustration,
DROP TABLE #tmpEmp, #tmpEmp1, ##tmpEmployee
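If you are not sure whether a table still exists in the current session, a safer cleanup pattern looks like this (DROP TABLE IF EXISTS is available in SQL Server 2016 and later):

-- Only drops the tables that actually exist.
DROP TABLE IF EXISTS #tmpEmp, #tmpEmp1, ##tmpEmployee;

-- Older equivalent using OBJECT_ID against tempdb.
IF OBJECT_ID('tempdb..#tmpEmp') IS NOT NULL
    DROP TABLE #tmpEmp;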

Let's check the temporary tables in SQL Server.

HostForLIFEASP.NET SQL Server 2022 Hosting

 



European SQL Server 2022 Hosting :: How to check if SQL table is temporal?

clock October 30, 2023 10:49 by author Peter

There are two methods for determining whether a SQL table is temporal. First, examine the table definition: a temporal table contains two system-versioned period columns (for example, sys_start_time and sys_end_time, or SysStartTime and SysEndTime as in the earlier example; the names can vary). These columns contain the beginning and end of the period for which each row is valid. A table defined with such a system-time period is a temporal table.

Second, use the OBJECTPROPERTY() function. The OBJECTPROPERTY() function returns information about a database object, including its temporal type. To determine whether a table is temporal, use the following SQL statement:

SELECT OBJECTPROPERTY(object_id('table_name'), 'TableTemporalType');

If the query returns 2, the table is a system-versioned temporal table (1 indicates a history table, and 0 a non-temporal table). For instance, the following SQL statement will determine whether the Products table is temporal:

SELECT OBJECTPROPERTY(object_id('Products'), 'TableTemporalType');

If the result of the query is 2, then the Products table is temporal. Otherwise, the Products table is not temporal.
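As another option (separate from the OBJECTPROPERTY approach above), the sys.tables catalog view exposes the temporal type directly:

SELECT name, temporal_type_desc
FROM sys.tables
WHERE name = 'Products';
-- SYSTEM_VERSIONED_TEMPORAL_TABLE indicates a temporal table.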

Note: Temporal tables are only supported in Microsoft SQL Server 2016 and later versions.

Here are some additional things to keep in mind about temporal tables:

  • A system-versioned temporal table is always paired with a history table. The history table can be created automatically by the database engine (a default history table) or explicitly created and named by the user.
  • Temporal tables can be used to store both current and historical data. Current data is stored in the table, while historical data is stored in a history table.
  • Temporal tables can be used to query both current and historical data. The FOR SYSTEM_TIME clause can be used to specify the time period for which the data should be returned.

HostForLIFEASP.NET SQL Server 2022 Hosting



European SQL Server 2019 Hosting :: Cumulative Credit/Debit Transaction In SQL Server

clock November 22, 2022 09:52 by author Peter

In this blog, we learn how to calculate credit and debit transactions, like a banking report, using SQL Server.

Step 1: Create a table-valued function to calculate the credit and debit amounts with a running balance.
USE [SqlBank]

CREATE   FUNCTION [dbo].[FNGetTransaction](@CID BIGINT)
RETURNS @Tab_TRansaction TABLE (id BIGINT ,CreditAmt MONEY,DebitAmt MONEY,Tr_Type VARCHAR(250),
TranDate DATETIME ,AC_ID BIGINT ,Balance DECIMAL(18,2),TType VARCHAR(200)
, CustName varchar(max),AC_NO bigint,Address varchar(max),Mobile varchar(max),
Email varchar(max),AC_OpDate datetime,Remarks varchar(max) , IsmailSend int)
AS
BEGIN

DECLARE @TempAC_ID BIGINT;

SET @TempAC_ID = (SELECT TOP 1 A.AC_ID FROM Tbl_Account A join Tbl_Cust
  C ON A.CID=C.CID WHERE c.CID=@CID)

DECLARE @Tbl_Tran Table
(id BIGINT,
CreditAmt MONEY,DebitAmt MONEY,Tr_Type VARCHAR(250),
TranDate DATETIME ,AC_ID BIGINT ,Balance DECIMAL(18,2),TType VARCHAR(200),
 CustName varchar(max),AC_NO bigint,Address varchar(max),Mobile varchar(max),
Email varchar(max),AC_OpDate datetime  ,
Remarks varchar(max)  , IsmailSend int
)

INSERT INTO @Tbl_Tran(id,CreditAmt,DebitAmt,Tr_Type,TranDate,AC_ID,Balance,TType,CustName ,AC_NO ,Address
 ,Mobile,Email ,AC_OpDate,Remarks,IsmailSend)
SELECT TR.TR_ID, CASE WHEN tr.TR_CrDrType ='Cr' THEN tr.TR_Amt ELSE 0 END CreditAmt,
CASE WHEN tr.TR_CrDrType ='Dr' THEN tr.TR_Amt ELSE 0 END DebitAmt ,Tr.TR_Type,tr.TR_Date,Tr.AC_ID ,
 CASE WHEN tr.TR_CrDrType ='Cr' THEN tr.TR_Amt ELSE 0 END - CASE WHEN tr.TR_CrDrType ='Dr'
 THEN tr.TR_Amt ELSE 0 END  Balance,
 Tr.TR_CrDrType  ,C.CName ,Acc.AC_NO ,C.CAddress ,C.CMObile,C.CEmail ,Acc.AC_OpDate ,
 Tr.Remarks , Tr.IsmailSend

FROM Tbl_Transaction Tr with(nolock) join Tbl_Account Acc with(nolock) ON acc.AC_ID=Tr.AC_ID
      join Tbl_Cust C with(nolock) ON C.CID=Acc.CID
WHERE Acc.CID=@CID;

WITH Tbl_CTE_Tran
as
(
SELECT T2.id,T2.CreditAmt,T2.DebitAmt,SUM(T1.CreditAmt-T1.DebitAmt) Balance,
T2.Tr_Type,T2.TranDate,T2.AC_ID
,T2.TType,T2.CustName ,T2.AC_NO ,T2.Address
 ,T2.Mobile,T2.Email ,T2.AC_OpDate,t2.Remarks,t2.IsmailSend FROM @Tbl_Tran T1
join @Tbl_Tran T2 on T1.id<=T2.id WHERE T2.AC_ID=@TempAC_ID
GROUP BY T2.id,T2.CreditAmt,T2.DebitAmt,T2.Tr_Type,T2.TranDate,T2.AC_ID,T2.TType,
T2.CustName ,T2.AC_NO ,T2.Address
 ,T2.Mobile,T2.Email ,T2.AC_OpDate  ,t2.Remarks ,t2.IsmailSend
)

INSERT INTO @Tab_TRansaction (id,CreditAmt,DebitAmt,Tr_Type,TranDate,AC_ID,Balance,TType,CustName ,AC_NO ,Address
 ,Mobile,Email ,AC_OpDate ,Remarks ,IsmailSend
 )
SELECT id,CreditAmt,DebitAmt,Tr_Type,TranDate,AC_ID,Balance,TType  ,CustName ,AC_NO ,Address
 ,Mobile,Email ,AC_OpDate ,Remarks,IsmailSend
FROM Tbl_CTE_Tran  with(nolock)
WHERE AC_ID=@TempAC_ID

RETURN
END
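Before wiring the function into a procedure, you can sanity-check it on its own. A minimal sketch using customer ID 2, the same value Step 3 passes to the procedure:

SELECT id, CreditAmt, DebitAmt, Balance, Tr_Type, TranDate
FROM dbo.FNGetTransaction(2)
ORDER BY id;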


Step 2: Create a procedure and call the above function inside it

USE [SqlBank]

CREATE PROC [dbo].[PROC_TRansaction]
(
@TR_ID int=null output,
@CID bigint=null,
@TR_Amt decimal(18,2)=null,
@AC_ID bigint =null,
@Flag varchar(100)=null,
@AC_No bigint=null,
@Remarks varchar(max)=null,
@MTR_ID int=null output,
@Balance decimal(18,2)=null output
)
AS
BEGIN
DECLARE @TempTRAmount decimal(18,2)
DECLARE @Temp_ACID bigint
DECLARE @Tran_ScopID bigint;
DECLARE @Tran_ID bigint;
DECLARE @MMTR_ID bigint;

BEGIN TRAN Tbl_Transaction_Tran
   BEGIN  TRY
      IF(@Flag = 'Tran')
      BEGIN

IF EXISTS(SELECT 1 FROM Tbl_Transaction Tr with(nolock) join Tbl_Account Acc
with(nolock) ON acc.AC_ID=Tr.AC_ID WHERE Acc.CID=@CID)
BEGIN
 SELECT  a.id id ,a.DebitAmt,a.CreditAmt,a.Balance
 ,a.Tr_Type, isnull(Format(a.TranDate,'dd-MMM-yyyy HH:mm'),'') TranDate, NCHAR(8377) Rupees ,a.TType,a.Remarks
   FROM dbo.FNGetTransaction(@CID) a
--      JOIN dbo.FNGetTransaction(@CID) b ON b.id<=a.id
--GROUP BY a.id,a.DebitAmt,a.CreditAmt,a.Tr_Type,a.TranDate,a.AC_ID,a.TType
END
ELSE
BEGIN
Select 'No Transaction summary found...?' OpMsg
END
    END
ELSE IF(@Flag = 'IN')
    BEGIN
    SET @Temp_ACID = (SELECT Top 1 A.AC_ID  FROM Tbl_Account A with(nolock)
     Join Tbl_Cust C with(nolock) ON A.CID=C.CID WHERE A.AC_No=@AC_No)
    DECLARE @SenderName varchar(max)
        SET @SenderName = (SELECT Top 1 c.CName  FROM Tbl_Account A with(nolock)
        Join Tbl_Cust C with(nolock) ON A.CID=C.CID WHERE c.CID=@CID)
    DECLARE @ReciverName varchar(max)
       SET @ReciverName = (SELECT Top 1 c.CName FROM Tbl_Account A with(nolock)
        Join Tbl_Cust C with(nolock) ON A.CID=C.CID
       WHERE A.AC_No=@AC_No)
SET @TempTRAmount = (
 SELECT TOP 1 ISNULL(SUM(b.balance),0) Balance
   FROM dbo.FNGetTransaction(@CID) a
JOIN dbo.FNGetTransaction(@CID) b ON b.id<=a.id
GROUP BY a.id,a.DebitAmt,a.CreditAmt,a.Tr_Type,a.TranDate,a.AC_ID,a.TType ORDER BY a.id desc)
if(@TR_Amt > @TempTRAmount)
BEGIN
Select 'Insuffitient Balance' as msg
END
ELSE
  BEGIN
  Declare @FixScratchAmt decimal(18,2)=500;
  --if not exists (select 1 from Tbl_Transaction Where TR_Date=CURRENT_TIMESTAMP and Ref_TranACC=@AC_ID)
  --begin
  Insert INTO Tbl_Transaction (TR_Type,TR_Amt,TR_Date,AC_ID,TR_CrDrType,Ref_TranACC,isdelete,IsTranType,IsMailSend,Remarks)
  Values                ('Online - Transfer To - '+ @ReciverName + ' '+Cast(@Ac_NO as varchar(max))+' ',
  ISNULL(@TR_Amt,0),CURRENT_TIMESTAMP,@AC_ID,'Dr','Tran-' +CAST(@AC_ID as varchar(max)),0,'S',0,@Remarks)
  set @Tran_ID = @@IDENTITY;
  set @TR_ID= @Tran_ID;
  set @Tran_ScopID= SCOPE_IDENTITY();
  Set @Balance = (SELECT TOP 1 BALANCE FROM dbo.FNGetTransaction(@CID) order by id desc)
  if(@TR_Amt >= @FixScratchAmt)
  begin
   Insert INTO Tbl_Transaction (TR_Type,TR_Amt,TR_Date,AC_ID,TR_CrDrType,Ref_TranACC,isdelete,IsTranType,IsMailSend,Remarks)
  Values                ('Cash Back From S Bank7 ',10,CURRENT_TIMESTAMP,@AC_ID,'Cr',0,1,'R',0,'Cash back from Sbank7. Pay & win more cash back ')
  END

Insert INTO Tbl_Transaction (TR_Type,TR_Amt,TR_Date,AC_ID,TR_CrDrType,Ref_TranACC,isdelete,IsTranType,IsMailSend,Remarks)
  Values                ('Recived From ' + @SenderName + ' Tran - '+Cast(@Tran_ScopID as varchar(max))+'-'+
  CAST(@AC_ID as varchar(max)),ISNULL(@TR_Amt,0),CURRENT_TIMESTAMP,@Temp_ACID,'Cr','Tran-'
  +Cast(@Tran_ScopID as varchar(max))+'-'+ CAST(@AC_ID as varchar(max)),0,'R',0,@Remarks)
  set @MMTR_ID = @@IDENTITY;
  set @MTR_ID = @MMTR_ID;
    END
    END
IF(@@TRANCOUNT > 0)
  BEGIN
  COmmit tran Tbl_Transaction_Tran
  END
END TRY
BEGIN CATCH
IF(@@TRANCOUNT > 0)
            BEGIN
            ROLLBACK TRAN Tbl_Transaction_Tran
            END
            DECLARE @USERID varchar(max),@ERRORLINE varchar(max)
            ,@ERRORMESSAGE varchar(max),@ERRORPROCEDURE varchar(500),@ERRORSEVERITY varchar(max)
            ,@ERRORSTATE varchar(max), @ErroFrm varchar(max)

            SELECT @USERID = SUSER_SNAME(),@ERRORLINE=ERROR_LINE(),@ERRORMESSAGE=ERROR_MESSAGE(),
                   @ERRORPROCEDURE=ERROR_PROCEDURE(),@ERRORSEVERITY=ERROR_SEVERITY(),
                  @ERRORSTATE= ERROR_STATE() ,@ErroFrm = 'Backend'

        EXEC Proc_ERRORLOG @USERID,@ERRORLINE,@ERRORMESSAGE,@ERRORPROCEDURE,@ERRORSEVERITY,@ERRORSTATE,0,@ErroFrm
END CATCH
END


Step 3: Execute the procedure to check the report
EXEC [dbo].[PROC_TRansaction]
    @CID = 2,
    @Flag = 'Tran'


I hope it works!

HostForLIFEASP.NET SQL Server 2019 Hosting



SQL Server Hosting - HostForLIFE.eu :: Repair SQL Database From Suspect Mode

clock June 14, 2016 20:11 by author Anthony

Sometimes we have to face a critical situation when a SQL Server database goes into suspect mode. At that moment, no work can be done on the database. A database may go into suspect mode because the primary filegroup is damaged and the database cannot be recovered during the startup of SQL Server.
Reasons for a database to go into suspect mode:


  • Data files or log files are corrupt.
  • Database server was shut down improperly
  • Lack of Disk Space
  • SQL cannot complete a rollback or roll forward operation

    
How to recover database from suspect mode:

  • Change the status of your database. Suppose the database name is "BlueChipDB".

EXEC sp_resetstatus 'YourDatabaseName';
Example:
EXEC sp_resetstatus 'BlueChipDB'

  • Set the database in “Emergency” mode

ALTER DATABASE YourDatabaseName SET EMERGENCY;
Example:
ALTER DATABASE BlueChipDB SET EMERGENCY

  • Check the database for any inconsistency

DBCC CHECKDB('YourDatabaseName');

Example:
DBCC checkdb('BlueChipDB')

If you get any errors after executing DBCC CHECKDB, immediately bring the database into SINGLE_USER mode by running the following query. If no errors are found, you do not need to execute it.

ALTER DATABASE YourDatabaseName SET SINGLE_USER WITH ROLLBACK IMMEDIATE

Example:
ALTER DATABASE BlueChipDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE

  • For safety, take a backup of the database.
  • Run the following query as the next step. Remember that the query below uses REPAIR_ALLOW_DATA_LOSS, which is a one-way operation: once the database is repaired, the actions performed by the query cannot be undone.
  • There is no way to go back to the previous state of the database.
  • So, as a precautionary step, you should take a backup of your database as mentioned in the earlier step.

DBCC CHECKDB ('YourDatabaseName', REPAIR_ALLOW_DATA_LOSS);

Example:
DBCC CheckDB ('BlueChipDB', REPAIR_ALLOW_DATA_LOSS)

  • Finally, bring the database into MULTI_USER mode

ALTER DATABASE YourDatabaseName SET MULTI_USER;
ALTER DATABASE [BlueChipDB]  SET MULTI_USER

  • Refresh your database server and verify the connectivity of your database (a quick state check is shown in the sketch below). Now users should be able to connect to the database properly. If there was any data loss, you can restore the database from the backup taken earlier.
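A hedged sketch of that state check, assuming the BlueChipDB database from the examples above:

SELECT name, state_desc
FROM sys.databases
WHERE name = 'BlueChipDB';
-- state_desc should now report ONLINE instead of SUSPECT or EMERGENCY.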

 

HostForLIFE.eu SQL 2016 Hosting
European best, cheap and reliable ASP.NET hosting with instant activation. HostForLIFE.eu is #1 Recommended Windows and ASP.NET hosting in the European Continent, with 99.99% uptime guaranteed for reliability, stability and performance. HostForLIFE.eu security team is constantly monitoring the entire network for unusual behaviour. We deliver hosting solutions including Shared hosting, Cloud hosting, Reseller hosting, Dedicated Servers, and IT as a Service for companies of all sizes.



About HostForLIFE

HostForLIFE is European Windows Hosting Provider which focuses on Windows Platform only. We deliver on-demand hosting solutions including Shared hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes.

We have offered the latest Windows 2019 Hosting, ASP.NET 5 Hosting, ASP.NET MVC 6 Hosting and SQL 2019 Hosting.

