European Windows 2019 Hosting BLOG

BLOG about Windows 2019 Hosting and SQL 2019 Hosting - Dedicated to European Windows Hosting Customers

European SQL Server 2022 Hosting :: SQL Server CLR Integration and SSIS Automation with C#

April 25, 2025 09:45 by author Peter

In modern enterprise systems, developers frequently face the challenge of integrating complex database operations and data-movement pipelines with application logic. Two powerful features of the Microsoft SQL Server ecosystem help close this gap.

  • SQL Server CLR Integration: Leverage .NET capabilities within SQL Server for advanced procedural logic.
  • SSIS Automation in C#: Programmatically control and automate ETL pipelines using SQL Server Integration Services (SSIS).

This article explores both concepts in depth, providing code examples, use cases, and best practices.

SQL CLR Functions in .NET: Embedding Business Logic in SQL Server

SQL CLR (Common Language Runtime) integration allows developers to create stored procedures, functions, aggregates, and triggers using any .NET language (like C#). This is particularly useful when T-SQL falls short for tasks requiring procedural logic, complex math, string operations, or external library support.

Example: A Simple CLR Scalar Function in C#
using Microsoft.SqlServer.Server;

// CLR scalar functions must be public static methods on a class compiled into a .NET Framework DLL
[SqlFunction]
public static int AddNumbers(int a, int b)
{
    return a + b;
}

After compiling this function into a DLL and registering it with SQL Server, it can be invoked just like a built-in T-SQL function.
SELECT dbo.AddNumbers(100, 250); -- Returns 350

Step-by-Step Deployment Process
Enable CLR in SQL Server

sp_configure 'clr enabled', 1;
RECONFIGURE;


Compile the C# code into a Class Library (DLL)
Use Visual Studio to create a Class Library project.
Set the project to target .NET Framework, not .NET Core.

Deploy the Assembly to SQL Server.
CREATE ASSEMBLY MyClrAssembly
FROM 'C:\Path\To\MyClrAssembly.dll'
WITH PERMISSION_SET = SAFE;


Create the Function.
CREATE FUNCTION dbo.AddNumbers(@a INT, @b INT)
RETURNS INT
AS EXTERNAL NAME MyClrAssembly.[YourNamespace.YourClass].AddNumbers;

When to Use SQL CLR Functions?

  • Complex mathematical operations: .NET has richer math libraries.
  • String and regex manipulation: .NET handles regular expressions far better than T-SQL.
  • File system or external access: use the EXTERNAL_ACCESS permission set.
  • Code reusability: centralize shared logic across applications and the database.

Note. Use CLR sparingly for security and performance. Avoid overusing it for tasks that T-SQL handles well.

Automating ETL with SSIS from C#: Taking Control of Data Pipelines
SQL Server Integration Services (SSIS) is a widely used tool for ETL (Extract, Transform, Load) processes. While it’s typically run via SQL Agent jobs or the SSIS catalog, sometimes you need tighter control — dynamic execution, real-time monitoring, or conditional branching based on application logic.

Example: Running a Package from C#

using Microsoft.SqlServer.Dts.Runtime;

Application app = new Application();
Package package = app.LoadPackage(@"C:\Packages\MyPackage.dtsx", null);
DTSExecResult result = package.Execute();

if (result == DTSExecResult.Success)
{
    Console.WriteLine("Package executed successfully.");
}
else
{
    Console.WriteLine("Package execution failed.");
}

What Can You Automate with This?

  • Trigger SSIS packages based on real-time events (like user actions, webhooks, or workflows).
  • Dynamically select packages, connections, or parameters based on app logic.
  • Integrate with logging and monitoring systems for auditing ETL runs.
  • Schedule or queue package runs without using SQL Agent (see the T-SQL sketch after this list).
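For packages deployed to the SSIS catalog rather than loaded from a .dtsx file, the same kind of on-demand execution can also be triggered from T-SQL through the SSISDB stored procedures. A minimal sketch, assuming a catalog folder MyFolder and project MyProject (both names are illustrative):

DECLARE @execution_id BIGINT;

-- Create an execution for the catalog-deployed package
EXEC SSISDB.catalog.create_execution
     @folder_name     = N'MyFolder',
     @project_name    = N'MyProject',
     @package_name    = N'MyPackage.dtsx',
     @use32bitruntime = 0,
     @execution_id    = @execution_id OUTPUT;

-- Start it; progress can then be watched in catalog.executions
EXEC SSISDB.catalog.start_execution @execution_id;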

Requirements & Tips

  • SSIS runtime: ensure Microsoft.SqlServer.ManagedDTS is referenced.
  • Permissions: the app or service account needs rights to run SSIS and access the packages.
  • DTSX package availability: ensure the package path is correct and accessible.
  • SQL Server Data Tools (SSDT): used for creating and debugging SSIS packages.

You can also manipulate variables, log events, and receive task-level execution results via the SSIS object model in C#.

Combining CLR + SSIS for End-to-End Automation
By using both CLR integration and SSIS automation in your application stack, you unlock powerful data and logic orchestration capabilities.

Practical Scenario
Imagine a financial reporting system.

  • You use SQL CLR functions to calculate custom interest models in queries.
  • You automate SSIS to pull raw transaction data nightly and load it into your analytics warehouse.
  • Your C# application coordinates both — triggering ETL, monitoring outcomes, and presenting results in dashboards.

Security and Best Practices

  • Avoid UNSAFE permissions unless absolutely necessary for SQL CLR.
  • Use strong-named assemblies for CLR to prevent version conflicts and security risks (see the signing sketch after this list).
  • Secure your package execution by using Windows authentication or proxy credentials in SSIS.
  • Isolate configuration: Read SSIS parameters from external configuration files or variables, not hardcoded paths.
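For an assembly that genuinely needs EXTERNAL_ACCESS, a common alternative to marking the database TRUSTWORTHY is to sign the DLL with a strong name and grant the permission through an asymmetric key in master. A minimal sketch, assuming the MyClrAssembly.dll from the earlier example has been signed (the key and login names are illustrative):

USE master;

-- Create a key from the signed DLL and a login that carries the assembly permission
CREATE ASYMMETRIC KEY MyClrAssemblyKey
    FROM EXECUTABLE FILE = 'C:\Path\To\MyClrAssembly.dll';

CREATE LOGIN MyClrAssemblyLogin FROM ASYMMETRIC KEY MyClrAssemblyKey;

GRANT EXTERNAL ACCESS ASSEMBLY TO MyClrAssemblyLogin;

-- The assembly can now be created with PERMISSION_SET = EXTERNAL_ACCESS in the user database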

Summary: Why This Matters

  • SQL CLR Integration: reuse .NET logic, enhance SQL performance, simplify complex operations.
  • SSIS Automation in C#: real-time control over ETL, seamless integration with business logic.

These technologies help you create agile, intelligent, and integrated data systems — essential in today’s data-driven applications.

Final Thoughts
SQL Server isn't just a database — it’s a platform for building smart, automated systems that react and scale with your application. Using CLR integration and SSIS automation, developers can tightly couple database processing with business workflows, reduce manual effort, and deliver greater value through code.

Ready to modernize your data workflows? Combine your C# skills with the power of SQL Server for next-level automation.

Full Class Example

using System;
using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;
using Microsoft.SqlServer.Dts.Runtime;

namespace SqlServerIntegration
{
    public class SqlServerIntegrationHelper
    {
        /// <summary>
        /// SQL CLR function to add two numbers.
        /// Can be registered in SQL Server as a UDF.
        /// </summary>
        [SqlFunction]
        public static SqlInt32 AddNumbers(SqlInt32 a, SqlInt32 b)
        {
            return a + b;
        }

        /// <summary>
        /// Executes an SSIS package from a given .dtsx file path.
        /// Returns true if successful, false otherwise.
        /// </summary>
        /// <param name="packagePath">Full path to the .dtsx package file</param>
        /// <returns>True if successful, false if failed</returns>
        public static bool ExecuteSSISPackage(string packagePath)
        {
            try
            {
                Application app = new Application();
                Package package = app.LoadPackage(packagePath, null);

                DTSExecResult result = package.Execute();

                if (result == DTSExecResult.Success)
                {
                    Console.WriteLine("✅ SSIS Package executed successfully.");
                    return true;
                }
                else
                {
                    Console.WriteLine("❌ SSIS Package execution failed.");
                    return false;
                }
            }
            catch (Exception ex)
            {
                Console.WriteLine($"⚠️ Error executing SSIS Package: {ex.Message}");
                return false;
            }
        }

        // Optional: Main method for standalone testing (Console App only)
        public static void Main()
        {
            Console.WriteLine("Running SSIS Package...");
            string path = @"C:\Packages\MyPackage.dtsx"; // Change this to your actual path
            ExecuteSSISPackage(path);
        }
    }
}

HostForLIFEASP.NET SQL Server 2022 Hosting

 



European SQL Server 2022 Hosting :: Schedule SSIS Jobs Step by Step with Screenshots

April 23, 2025 10:27 by author Peter

Follow the steps below.
Step 1. First, deploy the SSIS Package under Integration Services Catalogs.
Step 2. Expand SQL Server Agent option -> Jobs -> Create New Job
Step 3. Once you click New Job, the window below will open:

 

Step 4. In the General tab, enter your job name.
Step 5. In the Steps tab, click the New button.

 

Step 6. Once you click the New button, the window below will open.

  • Add a step name
  • Select the type SQL Server Integration Services Package

In Package Option:

  • Select Server
  • Select the Package that you want to Schedule.

Now Select Configuration Option:
Check Parameters


Check Connection Managers - sometimes you need to add a password if it is not auto-filled.
In the Advanced option, select 32-bit runtime.


Then click OK. Now your step is created.

Step 7. Now click the Schedules option on the left -> click the New button -> the window below will open:


HostForLIFEASP.NET SQL Server 2022 Hosting

 



European SQL Server 2022 Hosting :: In SQL Server Databases, Dynamically Create Foreign Keys

April 14, 2025 10:30 by author Peter

Suppose you have defined only the primary keys in your database, but later you want to add foreign keys as well. In that scenario, manually defining a foreign key in every table that references the main key used across the entire database is extremely tedious. This can be done dynamically with a straightforward script that reads every table in the database, looks for a specific column, and, if the column is found, creates a foreign key on it. The script is attached below.

This is the script for creating the Foreign Keys for all dependent tables:

-- Create a temp table to hold all user tables
IF OBJECT_ID('tempdb..#AllTables') IS NOT NULL DROP TABLE #AllTables;

-- Select all user-defined tables into a temporary table
SELECT name AS TableName
INTO #AllTables
FROM sys.tables
WHERE is_ms_shipped = 0;

-- Declare variables and cursor
DECLARE @TableName NVARCHAR(255);
DECLARE @SQL NVARCHAR(MAX);

DECLARE TableCursor CURSOR FOR
SELECT TableName FROM #AllTables;

-- Open cursor and iterate through each table
OPEN TableCursor;
FETCH NEXT FROM TableCursor INTO @TableName;

WHILE @@FETCH_STATUS = 0
BEGIN
    -- Check if 'CompanyID' column exists and no foreign key is defined (excluding 'CompanyMaster')
    IF @TableName <> 'CompanyMaster'  -- never add the constraint to the master table itself
    AND EXISTS (
        SELECT 1
        FROM sys.columns
        WHERE object_id = OBJECT_ID(@TableName)
          AND name = 'CompanyID'
    )
    AND NOT EXISTS (
        SELECT 1
        FROM sys.foreign_key_columns fkc
        JOIN sys.columns c
            ON fkc.parent_column_id = c.column_id
           AND fkc.parent_object_id = c.object_id
        WHERE c.name = 'CompanyID'
          AND fkc.parent_object_id = OBJECT_ID(@TableName)
    )
    BEGIN
        -- Build and execute SQL to add a foreign key constraint
        SET @SQL = '
        ALTER TABLE [' + @TableName + ']
        ADD CONSTRAINT FK_' + @TableName + '_CompanyID
        FOREIGN KEY (CompanyID) REFERENCES Company(CompanyID);';

        EXEC sp_executesql @SQL;
    END

    FETCH NEXT FROM TableCursor INTO @TableName;
END

-- Clean up
CLOSE TableCursor;
DEALLOCATE TableCursor;
DROP TABLE #AllTables;


After running this script, the foreign keys are created. To check this, you can query the system catalog as shown below.
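A quick verification query; it lists the constraints whose names follow the FK_<Table>_CompanyID pattern used by the script:

SELECT fk.name AS ForeignKeyName,
       OBJECT_NAME(fk.parent_object_id) AS TableName,
       OBJECT_NAME(fk.referenced_object_id) AS ReferencedTable
FROM sys.foreign_keys AS fk
WHERE fk.name LIKE 'FK%CompanyID';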

HostForLIFEASP.NET SQL Server 2022 Hosting



European SQL Server 2022 Hosting :: Understanding Conversion Functions in SQL

April 8, 2025 10:52 by author Peter

Conversion functions in SQL are used to change the data type of a value. These functions are essential when handling different data formats and ensuring consistency in data processing.

Types of Conversion Functions

  • CAST(): Converts an expression from one data type to another.
  • CONVERT(): Similar to CAST but allows formatting for date and numeric conversions.
  • TRY_CAST(): Similar to CAST but returns NULL instead of an error if conversion fails.
  • TRY_CONVERT(): Functions like CONVERT but returns NULL if conversion fails.
  • FORMAT(): Converts values into a formatted string.

Example Usage of Conversion Functions

1. Using CAST() Function
SELECT CAST(123.45 AS INT) AS ConvertedValue;

2. Using CONVERT() Function
SELECT CONVERT(VARCHAR, GETDATE(), 103) AS FormattedDate;

3. Using TRY_CAST() Function
SELECT TRY_CAST('123ABC' AS INT) AS Result;

Output. NULL (Fails due to non-numeric characters)


4. Using TRY_CONVERT() Function

SELECT TRY_CONVERT(INT, '456XYZ') AS Result;

Output. NULL (Fails due to non-numeric characters)

5. Using FORMAT() Function
SELECT FORMAT(1234567.89, 'N2') AS FormattedNumber;


Output. 1,234,567.89

Advantages of Conversion Functions

  • Helps standardize data representation across databases.
  • Allows formatting of numeric and date values.
  • Prevents errors when handling mixed data types.

HostForLIFEASP.NET SQL Server 2022 Hosting

 



European SQL Server 2022 Hosting :: Comprehending SQL Aggregate Functions

March 24, 2025 08:58 by author Peter

SQL's aggregate functions perform a calculation across multiple rows of data and return a single result.

Common Aggregate Functions

  • COUNT(): Returns the number of rows.
  • SUM(): Returns the total sum of a numeric column.
  • AVG(): Returns the average value of a numeric column.
  • MIN(): Returns the minimum value.
  • MAX(): Returns the maximum value.

Example Usage of Aggregate Functions

1. Using COUNT()

SELECT COUNT(*) FROM Employees;

2. Using SUM()

SELECT SUM(Salary) FROM Employees;

3. Using AVG()
SELECT AVG(Salary) FROM Employees;

4. Using MIN() and MAX()
SELECT MIN(Salary) FROM Employees;

SELECT MAX(Salary) FROM Employees;


Output
    MIN(Salary): 50,000 (Lowest salary)
    MAX(Salary): 200,000 (Highest salary)

Using GROUP BY with Aggregate Functions
GROUP BY is often used with aggregate functions to group results by one or more columns.
SELECT Department, AVG(Salary)
FROM Employees
GROUP BY Department;

Output

Department   AVG(Salary)
IT           120,000
HR           80,000
Finance      110,000

Using HAVING with Aggregate Functions

HAVING is used to filter results after aggregation.

SELECT Department, COUNT(*)
FROM Employees
GROUP BY Department
HAVING COUNT(*) > 10;

Output

Department   COUNT(*)
IT           15
Finance      12

Advanced Use of Aggregate Functions
Aggregate functions in SQL can be used in advanced ways to solve complex data analysis problems efficiently.

1. Using Aggregate Functions with CASE
SELECT Department,
       SUM(CASE WHEN Gender = 'Male' THEN 1 ELSE 0 END) AS Male_Count,
       SUM(CASE WHEN Gender = 'Female' THEN 1 ELSE 0 END) AS Female_Count
FROM Employees
GROUP BY Department;

Output

Department   Male_Count   Female_Count
IT           10           5
HR           3            8

2. Using Aggregate Functions with DISTINCT
SELECT COUNT(DISTINCT Department) AS Total_Departments FROM Employees;

Output. 5 (Total distinct departments)

3. Using Aggregate Functions with PARTITION BY
PARTITION BY allows applying aggregate functions without collapsing rows.
SELECT EmployeeID,
       Name,
       Department,
       Salary,
       AVG(Salary) OVER (PARTITION BY Department) AS Avg_Department_Salary
FROM Employees;

Output

EmployeeID   Name    Department   Salary    Avg_Department_Salary
1            John    IT           120000    110000
2            Sarah   IT           100000    110000


4. Using Aggregate Functions with HAVING for Filtering

SELECT Department,
   COUNT(*) AS Employee_Count
FROM Employees
GROUP BY Department
HAVING COUNT(*) > 10;

Output. Only departments with more than 10 employees are returned.

Advantages of Advanced Aggregate Functions

  • Allows detailed data analysis with conditions.
  • Enhances reporting and business intelligence capabilities.
  • Reduces query complexity using built-in SQL functions.
  • Helps in summarizing data.
  • Improves query efficiency by reducing result set size.
  • Facilitates data analysis and reporting.

These advanced aggregate functions help in efficient query design and deeper data insights.

HostForLIFEASP.NET SQL Server 2022 Hosting



European SQL Server 2022 Hosting - HostForLIFE :: Explaining Aggregate Functions in SQL

March 19, 2025 08:23 by author Peter

When querying with SQL, we frequently need to compute a value over a set of numbers, such as the total salary. For this purpose, SQL provides special functions called aggregate functions (or grouping functions). They are used with numerical and statistical data, often together with grouping, and let us obtain results from simple or complex mathematical computations.

Aggregate functions are predefined functions that, when used in a database query, carry out the required calculation and return the result.

Aggregate functions, such as SUM and COUNT, are characterized by returning a single value after performing a particular calculation on the data in a column.

Next, we will examine these functions.

Min Function

This function is used to obtain the minimum value in a set of values.
SELECT MIN(grade)
FROM StudentGrade;


In the example above, we use this function to find the lowest score of students in the student grades table.

Max Function

This function is exactly the opposite of the Min function; it is used to find the maximum value in a set of values.
SELECT MAX(salary)
FROM Personnel
WHERE Age < 35;


In this example, it finds and displays the highest salary among personnel who are under 35 years old.

Sum Function
This function is used to obtain the sum of numbers.
SELECT SUM(Salary)
FROM Personnel;


In this example, this function is used to sum all the salaries of the personnel in the personnel table.

Count Function

As the name of this function indicates, it is used to obtain the number of items.
SELECT COUNT(Id)
FROM Personnel;


The Count function is used to find the number of personnel.

Avg Function

The AVG function is actually an abbreviation for “average.” Using the AVG function, we can calculate and display the average of the desired values from grouped columns.
SELECT AVG(Salary)
FROM Personnel;


In this example, the AVG function is used to calculate the average salary of the personnel.

HostForLIFEASP.NET SQL Server 2022 Hosting



European SQL Server 2022 Hosting - HostForLIFE :: Removing Pointless Delete Operations

March 7, 2025 06:05 by author Peter

SQL Server performance problems are frequently caused by easy-to-fix bottlenecks that can be resolved with the appropriate tuning techniques. The DELETE statement will be the main topic of this short post.

Even in the simple recovery model, DELETE statements have the drawback of consuming transaction log space and requiring unnecessary logical reads. DELETE is a row-based operation that produces a lot of logical reads, whereas TRUNCATE removes every row of a table or partition at the storage level, which is a far quicker and more effective operation. Although both DELETE and TRUNCATE remove data from a table, their behavior differs with regard to rollback capabilities, recovery, logging, and performance.

Performance-wise, the DELETE statement is costly because it requires locking and logging. Each deleted row is recorded separately in the transaction log, which is what makes ROLLBACK possible. However, DELETE is required when a WHERE clause is needed to remove specific rows, when deleting data constrained by foreign keys, when triggers must fire, or when the ability to roll back must be preserved. If none of these conditions apply, using DROP or TRUNCATE to remove every row is far more effective: instead of operating row by row, TRUNCATE deallocates all of the table's pages, which improves efficiency.

If you can eliminate the need to DELETE the data by using TRUNCATE or DROP instead, you can get an immediate performance boost for the query, stored procedure, or function. Let's take a look at a very simple example.

Example
CREATE TABLE ExampleTable (
    ID INT IDENTITY(1,1) PRIMARY KEY,
    Item VARCHAR(100)
);

INSERT INTO ExampleTable (Item)
SELECT TOP 1000000 'SampleItem'
FROM [Production].[TransactionHistoryArchive];  -- Using an existing table to generate rows

SET STATISTICS TIME ON;
DELETE FROM ExampleTable;
SET STATISTICS TIME OFF;

SET STATISTICS TIME ON;
TRUNCATE TABLE ExampleTable;
SET STATISTICS TIME OFF;


RESULTS
DELETE

(89253 rows affected)
SQL Server Execution Times:
CPU time = 328 ms,  elapsed time = 1851 ms.
(89253 rows affected)


TRUNCATE

SQL Server Execution Times:
CPU time = 0 ms,  elapsed time = 4 ms.
Completion time: 2025-02-27T11:16:28.1156110-05:00

You can easily see the difference.

During code reviews, be sure to test the difference between the operations and see whether the DELETE is better replaced by something else. If this is not feasible, be sure to index properly for the DELETE operation for better efficiency. Remember to keep one key point in mind: because TRUNCATE is only minimally logged, it cannot be rolled back UNLESS it is inside an explicit transaction, as the short example below shows. So use this power carefully!
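A quick demonstration using the ExampleTable from above: wrapped in an explicit transaction, the TRUNCATE is still undone by ROLLBACK.

BEGIN TRANSACTION;

    TRUNCATE TABLE ExampleTable;
    SELECT COUNT(*) FROM ExampleTable;  -- 0: all rows gone

ROLLBACK TRANSACTION;

SELECT COUNT(*) FROM ExampleTable;      -- the original row count is back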

HostForLIFEASP.NET SQL Server 2022 Hosting



European SQL Server 2022 Hosting :: How to Correct an MS SQL Server Database's Recovery Pending State?

February 13, 2025 06:56 by author Peter

SQL Server is one of the most powerful database management systems for storing and retrieving data. Occasionally, DBAs are unable to access a database because it is in the "Recovery Pending" state. This article explains why this happens and offers suggestions for resolving the problem.

If one or more of the core files in a SQL database are corrupted, the database is considered damaged. The database will be marked with various states based on how serious the problem is. Among these states are:

  • Online: If a database data file is damaged during query execution, it will stay online.
  • Suspect: A database will be marked as a "suspect" if it cannot be recovered during SQL Server initialization.
  • Recovery Pending: The SQL Server places the database in a "Recovery Pending" state if it knows that a recovery needs to be done but is unable to begin due to an issue.

What Does SQL Server Recovery Pending State Mean?
The Recovery Pending state in MS SQL Server indicates that the database cannot start the recovery process due to missing files, resource constraints, or corruption issues. This is different from the Suspect state, which clearly shows there is corruption. Recovery Pending just means the recovery can't continue due to incomplete or inconsistent files.

Common Causes of SQL Server Recovery Pending State

  • Insufficient Disk Space: The database recovery process may halt due to a lack of space on the server.
  • Corrupted Log Files: Damaged or missing transaction log files can disrupt recovery.
  • Power Failure or Crash: Unexpected shutdowns can lead to database inconsistency.
  • Hardware Malfunctions: Disk errors or faulty storage devices can corrupt database files.
  • Improper Shutdowns: Forceful termination of SQL Server processes can result in uncommitted transactions.

When a database is in this state, it becomes inaccessible, and immediate action is required to restore normal operations.

How Does SQL Server Recovery Work?
When an SQL Server starts or a database is restarted, it goes through a recovery process with three phases:

  • Analysis: SQL Server reads the transaction log to determine which transactions need to be rolled forward or rolled back.
  • Redo (Roll Forward): All committed transactions from the log are reapplied to the database to ensure consistency.
  • Undo (Roll Back): Uncommitted transactions are rolled back to maintain a clean state.

Characteristics of a Database in the "Recovery Pending" State

  • Database Inaccessible: The database is not available for use by applications or users.
  • No Automatic Recovery: SQL Server is unable to initiate the automatic recovery process.
  • Error Messages: Common error messages related to this state include:
    • Error 9003: The log file is corrupt or missing.
    • Error 1813: SQL Server cannot attach the database because some files are missing.
    • Error 5123: The operating system returned an error while trying to access the database files.

How to Check if a Database is in Recovery Pending State?
To verify the state of your SQL Server database, execute the below query:
SELECT name, state_desc FROM sys.databases;

This query lists all databases and their current states. If the database is marked as "RECOVERY_PENDING" you need to fix the issue.

Methods to Fix SQL Server Recovery Pending State
1. Ensure Sufficient Disk Space: First, check if the drive with the database files has enough free space. If not, free up some space or move the files to a drive with more storage.

2. Check SQL Server Permissions: Make sure the SQL Server service account has the right permissions to access the database files. Wrong permissions can block the recovery process.

3. Manually Bring the Database Online: You can attempt to resolve the issue by setting the database to Emergency mode and performing repairs. Follow these steps:

Set the Database to Emergency Mode:
ALTER DATABASE [TestDatabase] SET EMERGENCY;

Perform Consistency Check: Run DBCC CHECKDB to check for corruption:
DBCC CHECKDB([TestDatabase]);

Repair the Database: If corruption is detected, use the REPAIR_ALLOW_DATA_LOSS option to repair the database:
ALTER DATABASE [TestDatabase] SET SINGLE_USER;
DBCC CHECKDB([TestDatabase], REPAIR_ALLOW_DATA_LOSS);
ALTER DATABASE [TestDatabase] SET MULTI_USER;


Note: The REPAIR_ALLOW_DATA_LOSS option may result in some data loss. Always back up your database before using this option.

4. Restore from a Backup: If you have a recent backup of the database, restoring it can be the safest way to resolve the issue:
RESTORE DATABASE [TestDatabase] FROM DISK = 'BackupFilePath.bak';

5. Use a third-party recovery tool: When there are problems with SQL Server, like database corruption or the "Recovery Pending" state, manual troubleshooting methods, such as restoring from backups, running DBCC CHECKDB, or detaching and reattaching database files, may not always help. In these situations, specialized tools like Stellar Repair for MS SQL can be very important for recovering essential data accurately and quickly.

How SQL Database Repair Tools Can Assist in Recovery?

  • Repair tools help fix corrupted MDF and NDF files and restore the database without changing its original structure.
  • The tool can bypass the issues and recover the data even if SQL Server cannot bring it online.
  • Unlike manual methods that may risk losing data, the recovery process ensures no data loss.
  • Whether you are using an old or the latest version, the tool supports all versions and ensures smooth recovery.
  • In situations where data is accidentally deleted, specialized recovery techniques can help retrieve the deleted records during the database repair process, ensuring that important information is not permanently lost.

Preventive Measures

  • Regularly back up your database to avoid data loss during unforeseen issues (a minimal backup example follows this list).
  • Monitor disk usage and ensure sufficient free space.
  • Use reliable storage devices to minimize hardware-related corruption.
  • Always shut down SQL Server gracefully to prevent uncommitted transactions.
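A minimal example of the first point; the path is illustrative, and in practice the backup would be scheduled, for example through a SQL Server Agent job:

BACKUP DATABASE [TestDatabase]
TO DISK = N'D:\Backups\TestDatabase_Full.bak'
WITH INIT, COMPRESSION, CHECKSUM;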

Conclusion
To fix a SQL Server database in a 'Recovery Pending' state, you need to find the root cause and take the right steps. Manual fixes like repairing the database or restoring from backups can help, but may not work for heavily corrupted databases. Always keep regular backups and check disk space to avoid these problems.

HostForLIFEASP.NET SQL Server 2022 Hosting



European SQL Server 2022 Hosting :: Recognizing Accuracy in SQL Server Computations

February 6, 2025 08:17 by author Peter

Many database developers encounter unexpected discrepancies when performing calculations in SQL Server. One common issue arises when the same mathematical expression produces different results depending on how it is evaluated. For instance, consider the following SQL Server code snippet:

DECLARE @Number1 AS DECIMAL(26,7) = 0.9009000;
DECLARE @Number2 AS DECIMAL(26,7) = 1.000000000;
DECLARE @Number3 AS DECIMAL(26,7) = 1000.00000000;
DECLARE @Result  AS DECIMAL(26,7);

SET @Result = (@Number1 * @Number2) / @Number3;

SELECT @Result; -- 0.0009000

SET @Result = (@Number1 * @Number2);

SET @Result = (@Result / @Number3);

SELECT @Result; -- 0.0009009


In the first case, the output is 0.0009000, while in the second case, the output is 0.0009009. This divergence raises the question: Why are the results different when the same calculation is performed?

Explanation. Single Step Calculation
In the first approach, the entire expression (@Number1 * @Number2) / @Number3 is computed in a single step:

  • SQL Server first computes the product of @Number1 and @Number2, which equals 0.9009000.
  • Next, it divides that result by @Number3 (1000.00000000).


The result of this division is affected by how SQL Server handles precision and rounding for decimal operations. This might introduce slight inaccuracies, leading to the outcome of 0.0009000.

Multiple Step Calculation

In the second approach, the operations are separated into two distinct steps:

  • First, the calculation @Number1 * @Number2 is executed and stored in @Result. This retains the value of 0.9009000.
  • Then, the variable @Result is divided by @Number3 in a separate statement.

This step-by-step division allows SQL Server to apply different rounding and precision rules, which can sometimes yield a more accurate result of 0.0009009.

Conclusion

The difference in outputs can often be attributed to the varying treatment of precision and rounding during calculations:

  • In a single-step calculation, SQL Server evaluates the entire expression at once, potentially altering precision during the process.
  • In a multiple-step calculation, SQL Server retains more precision through intermediate results, leading to a different output.

Resolution
To achieve consistent results in SQL Server calculations, developers should consider controlling precision explicitly. For example, applying rounding can help standardize outcomes:
SET @Result = ROUND((@Number1 * @Number2) / @Number3, 7);

By managing precision and rounding explicitly, programmers can avoid discrepancies and ensure that their numerical calculations yield the expected results. Understanding these nuances in SQL Server can lead to more reliable and accurate database operations.

HostForLIFEASP.NET SQL Server 2022 Hosting

 



European SQL Server 2022 Hosting :: Differences Between TRUNCATE and DELETE in SQL Server

January 23, 2025 06:57 by author Peter

Although both TRUNCATE and DELETE can be used to remove data from a table in SQL Server, they differ greatly in terms of logging, performance, and how they affect the table structure. We examine these distinctions with real-world examples below.

Deleting Specific Rows
You have a large customer database and need to delete records based on a condition, such as all customers from a specific country.

DELETE FROM Customers
WHERE Country = 'USA';

Why Use DELETE?

  • DELETE allows you to remove specific rows based on a WHERE condition.
  • It supports triggers, enabling additional actions like logging or cascading updates.
  • Referential integrity is preserved, ensuring foreign key constraints are respected.

Note. Since DELETE logs each row individually, it can be slow for large datasets.

Resetting a Table for Data Migration

During data migration, you need to clear all rows from a table, such as the Users or Orders table, before inserting new data.
TRUNCATE TABLE Users;

Why Use TRUNCATE?

  • TRUNCATE quickly removes all rows from the table.
  • It resets the identity column, allowing new rows to start from the seed value.
  • The operation is much faster than DELETE as it does not log each row deletion.

Note. TRUNCATE cannot be used if there are foreign key constraints, even if those constraints are defined with ON DELETE CASCADE.

Managing Temporary Tables for Large Datasets
While working with large datasets, you need to clear the contents of a temporary table after processing it, such as Temp_SessionData.
TRUNCATE TABLE Temp_SessionData;

Why Use TRUNCATE?

  • It efficiently clears large amounts of data from the table.
  • No individual row logs are generated, making it a fast cleanup option.
  • Ideal for temporary tables where data retention is unnecessary.

Note. Using TRUNCATE avoids performance bottlenecks associated with row-by-row deletions.

Deleting Data with Referential Integrity
You need to delete all records in a parent table (e.g., Customers) while ensuring that related records in child tables (e.g., Orders) are also removed.
DELETE FROM Customers WHERE Country = 'USA';

Why Use DELETE?

  • DELETE respects foreign key constraints and triggers cascading deletions to dependent tables.
  • Cascading deletions, defined with ON DELETE CASCADE, ensure child rows (e.g., in Orders) are automatically deleted.

Note. While DELETE is slower than TRUNCATE, it ensures referential integrity and cascading actions across related tables.

Regular Data Resets for Large Tables

You regularly refresh data in a table (e.g., SalesData) from an external system and need to reset it completely.
TRUNCATE TABLE SalesData;

Why Use TRUNCATE?

  • TRUNCATE quickly wipes out all data and resets the identity column, starting new rows from the default seed value.
  • It is more efficient and minimalistic compared to DELETE.

Note. Check that no foreign key dependencies exist, as these will block the use of TRUNCATE.
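A quick way to check for such dependencies before truncating; any rows returned mean TRUNCATE TABLE SalesData will fail:

SELECT fk.name AS ForeignKeyName,
       OBJECT_NAME(fk.parent_object_id) AS ReferencingTable
FROM sys.foreign_keys AS fk
WHERE fk.referenced_object_id = OBJECT_ID('dbo.SalesData');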

Partial Table Cleanup with Complex Conditions
You need to clean up a specific subset of data from a large table where conditions involve multiple columns (e.g., inactive users who haven’t logged in for a year).
DELETE FROM Users
WHERE LastLogin < DATEADD(YEAR, -1, GETDATE())
  AND IsActive = 0;


Why Use DELETE?

  • DELETE enables precise removal of rows based on complex conditions.
  • It ensures that other unaffected rows remain intact.
  • Triggers can be used to log or audit the deletions.

Note. For large datasets, indexing the columns used in the WHERE clause can improve performance.

Archiving Old Data
You need to archive old transactional data from a table (e.g., Orders) into an archive table before removing it from the main table.
INSERT INTO ArchivedOrders
SELECT *
FROM Orders
WHERE OrderDate < '2023-01-01';

DELETE FROM Orders
WHERE OrderDate < '2023-01-01';


Why Use DELETE?

  • DELETE allows the selective removal of old data after archiving.
  • It ensures referential integrity for current data.
  • Archiving can be performed in batches to avoid locking issues on the main table.

Note. Using DELETE in combination with INSERT INTO helps retain historical data while managing table size.

Clearing Audit Logs Periodically
Use Case

Your application generates a large number of audit logs, and you periodically clear logs older than a specific timeframe to maintain performance.
TRUNCATE TABLE AuditLogs;

Why Use TRUNCATE?

  • Audit logs often do not require referential integrity, making TRUNCATE a fast and efficient option.
  • It clears all rows quickly without logging each deletion.
  • TRUNCATE minimizes the overhead on large tables with high write frequency.

Note. Check that retention policies are satisfied before truncating, as all data will be permanently removed.

Performance Considerations

Deleting a Large Number of Rows: DELETE can be slow for large tables since it logs each row deletion. To handle large data deletions more efficiently, you can break the operation into smaller chunks:
DELETE TOP (1000)
FROM Customers
WHERE Country = 'USA';
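To remove all qualifying rows this way, the chunked DELETE is typically repeated in a loop until nothing is left; a minimal sketch (the batch size can be tuned, and a short WAITFOR DELAY between batches can further reduce blocking):

WHILE 1 = 1
BEGIN
    DELETE TOP (1000)
    FROM Customers
    WHERE Country = 'USA';

    IF @@ROWCOUNT = 0
        BREAK;  -- no more rows to delete
END;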


Truncating Large Tables: TRUNCATE is much faster because it doesn't log individual row deletions and deallocates entire pages, making it efficient for large-scale deletions.

Summary of When to Use Each

  • Use DELETE when
    • You need to delete specific rows based on conditions.
    • You need to trigger referential integrity checks or cascading deletions.
    • You need to ensure that triggers are fired.
    • You don't want to reset the identity column or change the table structure.
  • Use TRUNCATE when
    • You need to remove all rows from a table and reset the identity column.
    • There are no foreign key constraints.
    • You want faster performance and minimal logging for bulk deletions.

Now let's walk through a stock exchange scenario where you can apply both the DELETE and TRUNCATE commands in SQL Server, depending on the requirements. This covers a realistic stock trading system, including market operations, account management, and transaction logs.


In a stock exchange system, you might have multiple tables such as:

  • Stocks (Information about stocks traded)
  • Trades (Transaction records of stock buys and sells)
  • Orders (Active orders for buying/selling stocks)
  • Users (Trader accounts and details)

 

We will look at how DELETE and TRUNCATE can be used for various operations in this stock exchange system, depending on whether we need to delete specific records, reset tables, or manage large datasets efficiently.

Deleting a Specific Stock Order from the Orders Table

Let’s say a trader cancels a buy/sell order. You need to delete the specific order record from the Orders table.

  • The trader places an order to buy 100 shares of the Company Jack&Jones.
  • The order is still active and hasn’t been filled.
  • The trader decides to cancel the order.

DELETE FROM Orders
WHERE OrderID = 12345;


Why use DELETE?

  • You are deleting a specific row that matches the condition (based on OrderID).
  • You need to ensure that any relevant foreign key constraints (e.g., relationship to Users or Stocks) are respected. If any cascades or actions need to be triggered (e.g., updating user balance or stock status), DELETE ensures that happens.
  • The operation is logged for transaction tracking, allowing you to monitor exactly what was deleted.

Impact
This operation is slow compared to TRUNCATE, especially if there are a large number of active orders in the system, but it is necessary for deleting specific rows based on user actions.

Resetting All Orders for the End of the Day

At the end of each trading day, the stock exchange needs to clear all orders from the Orders table to prepare for the next trading day. The system clears all records, regardless of whether the orders are pending or executed.

  • The stock exchange clears all pending orders after the market closes.
  • You want to quickly remove all rows from the Orders table to start fresh.

SQL Query
TRUNCATE TABLE Orders;

Why use TRUNCATE?

  • You want to remove all rows efficiently without worrying about individual row deletions.
  • This operation does not log individual row deletions (it’s minimally logged), making it much faster when dealing with large datasets.
  • It also resets any identity column (if there’s one for OrderID), which is useful if you want to restart the order numbering from the seed value the next day.

Consideration
Ensure that there are no foreign key constraints in place, or if there are, ensure TRUNCATE is allowed (i.e., no dependencies with cascading deletes).

Impact

  • Efficient for clearing large volumes of data, but be cautious if there are foreign key constraints that prevent truncating the table.
  • This is suitable for an end-of-day reset where all orders must be wiped out to prepare for the next day.

Clearing Historical Data for Trades
The exchange wants to archive the trades older than a year, as they are no longer relevant for active trading or reporting but need to be stored for historical purposes.
Trades that happened more than a year ago need to be archived into a backup system, and the records should be removed from the main Trades table.

SQL Query
DELETE FROM Trades
WHERE TradeDate < DATEADD(YEAR, -1, GETDATE());

Why use DELETE?

  • You need to delete specific records based on the condition (TradeDate < DATEADD(YEAR, -1, GETDATE())), so DELETE allows for precise control.
  • The operation respects foreign key constraints, ensuring that dependent data in other tables (such as order details or user information) is also managed correctly.

Consideration
If you have millions of trades, DELETE might be slow, but this is necessary if you want to keep the data that is still relevant (for example, trades made within the last year).

Impact
Deleting specific records ensures that important data (such as current trades) is not deleted by mistake, and you can archive old data efficiently.

Resetting All Stock Prices for a New Trading Day

On the stock exchange, stock prices need to be reset every morning to the opening prices of the day. You want to clear all the previous day’s data and set the new day's prices.

Scenario
Every morning, you reset the stock price data for each traded stock.
The previous day’s data is irrelevant for the new trading day, so it’s cleared.

SQL Query
TRUNCATE TABLE StockPrices;

Why use TRUNCATE?
  • You want to quickly remove all rows without worrying about specific conditions or the individual deletion of each stock's price.
  • It's much more efficient than DELETE since it doesn't log each row removal, which is ideal for large datasets where performance is crucial.

Consideration
If there are foreign key constraints or dependencies on the StockPrices table (e.g., historical trades), TRUNCATE may not be possible. In such cases, you would need to first delete or archive related data.

Impact
Faster performance compared to DELETE and useful for daily resets. This is a classic use of TRUNCATE when the data doesn't need to be retained across days.

Deleting Specific Trade Records Based on a Condition
Let’s say an anomaly or error occurred in the trading system where certain trades were mistakenly recorded (perhaps due to a bug or a trading error), and you need to delete them.

Scenario

  • A certain group of trades (e.g., trades that involve incorrect stock symbols or trade amounts) needs to be deleted.
  • These trades are identifiable based on a condition, such as a stock symbol mismatch or a trade amount exceeding a predefined limit.

SQL Query
DELETE FROM Trades
WHERE StockSymbol = 'INVALID' OR TradeAmount > 1000000;


Why use DELETE?

  • The operation deletes specific rows based on certain conditions, so DELETE is appropriate for this task.
  • Triggers (if any) would be fired to ensure related actions are performed (e.g., adjusting the trader's balance or reversing order statuses).

Consideration
This could be a slow operation if the trades table is very large and the condition affects a significant number of rows.

Impact
It’s essential to identify and delete only the erroneous rows based on conditions, so DELETE allows for precise control.

In stock exchange systems, TRUNCATE is usually reserved for bulk operations where all data can be removed quickly (such as resetting stock prices), while DELETE is used for more granular, specific removals or when data integrity constraints are involved (such as removing erroneous trades or orders).

HostForLIFEASP.NET SQL Server 2022 Hosting



