European Windows 2019 Hosting BLOG

BLOG about Windows 2019 Hosting and SQL 2019 Hosting - Dedicated to European Windows Hosting Customer

SQL Server Hosting - HostForLIFE :: SQL Server Analytics-Powered Intelligent Inventory Management System

clock December 17, 2025 06:46 by author Peter

One of the most important duties for businesses in the retail, manufacturing, logistics, and e-commerce sectors is inventory management. Too little inventory leads to missed sales, whereas too much inventory raises storage costs. Conventional systems merely monitor reorder levels, purchases, and quantities. However, contemporary firms need something more sophisticated: deeper analytics, automated replenishment, anomaly detection, and forecasting.

A smart inventory management system powered by SQL Server analytics offers exactly these features. With SQL Server's query engine, Machine Learning Services, rich reporting, and Angular-based dashboards, organizations can build intelligent solutions that forecast demand, optimize stock levels, and lower overall operating costs.

This guide explains how to build a complete production-ready, AI-enhanced smart inventory system using:

  • SQL Server (inventory, analytics, ML)
  • ASP.NET Core (API layer)
  • Angular (frontend UI)
  • AI models (forecasting and anomaly detection)
  • Background jobs (automated replenishment)

This article is designed for beginner, intermediate, and senior developers.

1. Understanding Smart Inventory Management
A smart inventory system goes beyond storing stock count. It uses analytics and AI to support business decisions.

Key Features

  • Real-time inventory tracking
  • Automatic reorder suggestions
  • Demand forecasting
  • Low-stock alerts
  • Supplier performance analysis
  • ABC classification
  • Expiry and batch tracking
  • Sales velocity analytics
  • Safety stock calculation
  • Dashboard visualizations

Traditional systems offer CRUD operations. Smart systems offer insights.

2. High-Level Architecture
Here is the recommended architecture for a robust smart inventory solution:
Angular UI
    ↓
ASP.NET Core REST API
    ↓
SQL Server (Inventory, Analytics, ML Models)
    ↓
Python/R ML Scripts (Forecasting)
    ↓
Background Services (alerts, reorder engine)

Each component has a specific responsibility:

  • Angular: dashboards, charts, tables, alerts, user operations
  • ASP.NET Core: APIs for stock, suppliers, forecasting, notifications
  • SQL Server: stores stock data, purchase history, demand patterns, model outputs
  • Machine Learning Services: forecasting models (Python/R)
  • Background Jobs: periodic analytics refresh, reorder suggestions, threshold monitoring

3. Designing the Inventory Database
SQL Server stores inventory, sales, supplier data, and forecast results.

3.1 Items Table

CREATE TABLE Items (
    ItemId INT IDENTITY PRIMARY KEY,
    ItemName NVARCHAR(200),
    SKU NVARCHAR(100),
    Category NVARCHAR(100),
    SupplierId INT,
    ReorderLevel INT,
    SafetyStock INT,
    UnitPrice DECIMAL(18,2)
);

3.2 Stock Table
CREATE TABLE Stock (
    StockId INT IDENTITY PRIMARY KEY,
    ItemId INT,
    Quantity INT,
    LastUpdated DATETIME DEFAULT GETDATE()
);

3.3 Sales History Table
CREATE TABLE SalesHistory (
    SaleId INT IDENTITY PRIMARY KEY,
    ItemId INT,
    QuantitySold INT,
    SaleDate DATE
);

3.4 Forecast Results
CREATE TABLE ForecastedDemand (
    ForecastId INT IDENTITY PRIMARY KEY,
    ItemId INT,
    ForecastMonth DATE,
    ForecastQuantity INT
);

3.5 Reorder Suggestions
CREATE TABLE SuggestedOrders (
    SuggestionId INT IDENTITY PRIMARY KEY,
    ItemId INT,
    SuggestedQuantity INT,
    GeneratedAt DATETIME DEFAULT GETDATE()
);
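
The schema above omits keys and indexes for brevity. A minimal sketch of constraints and indexes that support the queries used later in this article (names are illustrative and should be adapted to your workload):

-- Hypothetical constraints and indexes
ALTER TABLE Stock        ADD CONSTRAINT FK_Stock_Items        FOREIGN KEY (ItemId) REFERENCES Items(ItemId);
ALTER TABLE SalesHistory ADD CONSTRAINT FK_SalesHistory_Items FOREIGN KEY (ItemId) REFERENCES Items(ItemId);

-- Supports per-item forecasting queries (filter by ItemId, aggregate by SaleDate)
CREATE INDEX IX_SalesHistory_Item_Date ON SalesHistory (ItemId, SaleDate) INCLUDE (QuantitySold);

-- Supports the reorder engine's stock lookups
CREATE INDEX IX_Stock_ItemId ON Stock (ItemId) INCLUDE (Quantity);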


4. Using SQL Server Analytics for Demand Forecasting
SQL Server Machine Learning Services lets you run Python or R forecasting directly inside stored procedures. This avoids exporting data to external systems.

Example: Monthly Demand Forecast Using Python
EXEC sp_execute_external_script
  @language = N'Python',
  @script = N'
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

df = InputDataSet
df["SaleDate"] = pd.to_datetime(df["SaleDate"])
df = df.resample("M", on="SaleDate").sum()

model = ExponentialSmoothing(df["QuantitySold"], trend="add", seasonal="add", seasonal_periods=12)
fit = model.fit()

forecast = fit.forecast(3)  # next 3 months

OutputDataSet = forecast.reset_index()
',
  @input_data_1 = N'SELECT SaleDate, QuantitySold FROM SalesHistory WHERE ItemId = 101'
WITH RESULT SETS ((ForecastMonth DATE, ForecastQuantity FLOAT));


Insert the results into ForecastedDemand (the procedure's result set must match the insert column list):
INSERT INTO ForecastedDemand(ItemId, ForecastMonth, ForecastQuantity)
EXEC ForecastStoredProcedure @ItemId = 101;
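
ForecastStoredProcedure is not defined elsewhere in this article, so here is a hedged sketch of what it could look like: it wraps the sp_execute_external_script call shown above and returns ItemId alongside the forecast columns so the result set matches the INSERT column list. Parameter handling and column names are assumptions.

CREATE PROCEDURE ForecastStoredProcedure
    @ItemId INT
AS
BEGIN
    SET NOCOUNT ON;

    -- Build the per-item input query for the external script
    DECLARE @inputQuery NVARCHAR(MAX) =
        N'SELECT SaleDate, QuantitySold FROM SalesHistory WHERE ItemId = '
        + CAST(@ItemId AS NVARCHAR(20));

    EXEC sp_execute_external_script
        @language = N'Python',
        @script = N'
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

df = InputDataSet
df["SaleDate"] = pd.to_datetime(df["SaleDate"])
df = df.resample("M", on="SaleDate").sum()

fit = ExponentialSmoothing(df["QuantitySold"], trend="add", seasonal="add", seasonal_periods=12).fit()

out = fit.forecast(3).reset_index()
out.columns = ["ForecastMonth", "ForecastQuantity"]
out.insert(0, "ItemId", ItemId)   # ItemId is passed in via @params below
OutputDataSet = out
',
        @input_data_1 = @inputQuery,
        @params = N'@ItemId INT',
        @ItemId = @ItemId
    WITH RESULT SETS ((ItemId INT, ForecastMonth DATE, ForecastQuantity FLOAT));
END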


5. Building ASP.NET Core API Layer
ASP.NET Core exposes inventory data and forecast analytics to Angular.

5.1 Get Current Stock
[HttpGet("stock")]
public async Task<IActionResult> GetStock()
{
    var items = await _context.Stock
        .Include(x => x.Item)
        .Select(x => new {
            x.Item.ItemName,
            x.Quantity,
            x.Item.ReorderLevel,
            x.Item.SafetyStock
        })
        .ToListAsync();

    return Ok(items);
}

5.2 Get Forecast Results
[HttpGet("forecast/{itemId}")]
public async Task<IActionResult> GetForecast(int itemId)
{
    var result = await _context.ForecastedDemand
        .Where(x => x.ItemId == itemId)
        .OrderBy(x => x.ForecastMonth)
        .ToListAsync();

    return Ok(result);
}

5.3 Generate Reorder Suggestions
[HttpPost("suggest-reorder")]
public async Task<IActionResult> GenerateReorders()
{
    var items = await _context.Items.ToListAsync();
    var stock = await _context.Stock.ToListAsync();

    foreach(var item in items)
    {
        var stockRow = stock.FirstOrDefault(s => s.ItemId == item.ItemId);
        if (stockRow == null) continue;   // no stock record for this item
        var qty = stockRow.Quantity;

        if(qty < item.ReorderLevel)
        {
            var suggestedQty = (item.ReorderLevel + item.SafetyStock) - qty;

            _context.SuggestedOrders.Add(new SuggestedOrder
            {
                ItemId = item.ItemId,
                SuggestedQuantity = suggestedQty
            });
        }
    }

    await _context.SaveChangesAsync();
    return Ok();
}


6. Angular Application Structure

Angular provides the stock dashboards, charts, and alert screens.

Recommended modules:

  • DashboardModule
  • InventoryModule
  • SalesModule
  • ForecastModule
  • AlertsModule

Install chart libraries (the baseChart directive used later comes from ng2-charts)
npm install chart.js ng2-charts

7. Angular Service for Inventory API
inventory.service.ts
@Injectable({ providedIn: 'root' })
export class InventoryService {
  constructor(private http: HttpClient) {}

  getStock(): Observable<StockItem[]> {
    return this.http.get<StockItem[]>('/api/inventory/stock');
  }

  getForecast(id: number): Observable<Forecast[]> {
    return this.http.get<Forecast[]>(`/api/inventory/forecast/${id}`);
  }
}


export interface StockItem {
  itemName: string;
  quantity: number;
  reorderLevel: number;
  safetyStock: number;
}

export interface Forecast {
  forecastMonth: string;
  forecastQuantity: number;
}

8. Building Angular Dashboard
Stock Overview Component
stock-overview.component.ts

@Component({
  selector: 'app-stock-overview',
  templateUrl: './stock-overview.component.html'
})
export class StockOverviewComponent implements OnInit {

  displayedColumns = ['itemName','quantity','reorderLevel','safetyStock'];
  dataSource = new MatTableDataSource<StockItem>();

  constructor(private inventory: InventoryService) {}

  ngOnInit() {
    this.inventory.getStock().subscribe(res => {
      this.dataSource.data = res;
    });
  }
}


HTML:
<mat-card>
  <h2>Current Stock</h2>

  <table mat-table [dataSource]="dataSource" class="mat-elevation-z8">

    <ng-container matColumnDef="itemName">
      <th mat-header-cell *matHeaderCellDef> Item </th>
      <td mat-cell *matCellDef="let row">{{ row.itemName }}</td>
    </ng-container>

    <ng-container matColumnDef="quantity">
      <th mat-header-cell *matHeaderCellDef> Stock </th>
      <td mat-cell *matCellDef="let row">{{ row.quantity }}</td>
    </ng-container>

    <ng-container matColumnDef="reorderLevel">
      <th mat-header-cell *matHeaderCellDef> Reorder Level </th>
      <td mat-cell *matCellDef="let row">{{ row.reorderLevel }}</td>
    </ng-container>

    <ng-container matColumnDef="safetyStock">
      <th mat-header-cell *matHeaderCellDef> Safety Stock </th>
      <td mat-cell *matCellDef="let row">{{ row.safetyStock }}</td>
    </ng-container>

    <tr mat-header-row *matHeaderRowDef="displayedColumns"></tr>
    <tr mat-row *matRowDef="let row; columns: displayedColumns"></tr>

  </table>
</mat-card>


9. Forecast Chart in Angular
forecast-chart.component.ts

@Component({
  selector: 'app-forecast-chart',
  templateUrl: './forecast-chart.component.html'
})
export class ForecastChartComponent implements OnInit {

  lineChartData = [];
  lineChartLabels = [];

  constructor(private inventory: InventoryService) {}

  ngOnInit() {
    this.inventory.getForecast(101).subscribe(res => {
      this.lineChartLabels = res.map(r => r.forecastMonth);
      this.lineChartData = [
        {
          data: res.map(r => r.forecastQuantity),
          label: 'Forecasted Demand'
        }
      ];
    });
  }
}

HTML:
<mat-card>
  <h2>Demand Forecast</h2>

  <canvas baseChart
          [datasets]="lineChartData"
          [labels]="lineChartLabels"
          chartType="line">
  </canvas>
</mat-card>


10. Automated Reorder Engine

A background job (Hangfire or Quartz.NET) can run nightly:
Example Scheduled Job
public class ReorderJob
{
    private readonly InventoryService _service;

    public ReorderJob(InventoryService service)
    {
        _service = service;
    }

    public async Task Execute()
    {
        await _service.GenerateReorderSuggestions();
    }
}


Register with Hangfire:
RecurringJob.AddOrUpdate<ReorderJob>(
    "auto-reorder",
    job => job.Execute(),
    Cron.Daily);


11. Alerts and Notifications
Based on stock thresholds or forecasts, send alerts:

SQL trigger-based alerts
-- Set-based check (ReorderLevel lives on Items, so join the two tables)
INSERT INTO Alerts(ItemId, Message)
SELECT s.ItemId, 'Low Stock'
FROM Stock s JOIN Items i ON i.ItemId = s.ItemId
WHERE s.Quantity < i.ReorderLevel;
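
Wrapped in an actual trigger on the Stock table, a minimal sketch (assuming an Alerts table with ItemId and Message columns) could look like this:

CREATE TRIGGER trg_Stock_LowStockAlert
ON Stock
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    -- Raise an alert for every modified row that fell below its reorder level
    INSERT INTO Alerts (ItemId, Message)
    SELECT i.ItemId, 'Low Stock'
    FROM inserted ins
    JOIN Items i ON i.ItemId = ins.ItemId
    WHERE ins.Quantity < i.ReorderLevel;
END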


Angular Alerts Screen

Use Material Snackbar or Notification Panel to display alerts.

12. Performance Analytics
A smart system must analyze:

  • Inventory turnover rate
  • Slow-moving items
  • Fast-moving items
  • Supplier lead time trends
  • Stock aging reports
  • Purchase frequency

Example SQL for turnover:
SELECT sh.ItemId,
       SUM(sh.QuantitySold) * 1.0 / NULLIF(SUM(s.Quantity), 0) AS TurnoverRate
FROM SalesHistory sh
JOIN Stock s ON sh.ItemId = s.ItemId
GROUP BY sh.ItemId;
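
For the slow-moving and fast-moving item analyses listed above, a simple sales-velocity query is a reasonable starting point; the 90-day window below is illustrative:

-- Average units sold per day over the last 90 days; low values indicate slow movers
SELECT i.ItemId,
       i.ItemName,
       SUM(sh.QuantitySold) * 1.0 / 90 AS AvgDailySales
FROM Items i
LEFT JOIN SalesHistory sh
       ON sh.ItemId = i.ItemId
      AND sh.SaleDate >= DATEADD(DAY, -90, CAST(GETDATE() AS DATE))
GROUP BY i.ItemId, i.ItemName
ORDER BY AvgDailySales ASC;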

Angular visuals can show:

  • Doughnut charts
  • Column charts
  • Line graphs

13. AI Enhancements
13.1 Predictive Restocking
Use forecasting + safety stock.

13.2 Anomaly Detection
Detect unusual sales patterns.

13.3 Price Optimization

Suggest ideal selling price.

13.4 Stock-Out Prediction
Forecast when stock will reach zero.
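
As a rough T-SQL approximation, days of cover can be estimated by dividing current stock by recent sales velocity; a forecast-driven version (Section 4) is more accurate. The 30-day window is illustrative:

-- Estimated days of cover = current stock / average daily sales over the last 30 days
SELECT s.ItemId,
       s.Quantity,
       s.Quantity * 1.0 / NULLIF(SUM(sh.QuantitySold) / 30.0, 0) AS EstimatedDaysUntilStockOut
FROM Stock s
JOIN SalesHistory sh
  ON sh.ItemId = s.ItemId
 AND sh.SaleDate >= DATEADD(DAY, -30, CAST(GETDATE() AS DATE))
GROUP BY s.ItemId, s.Quantity;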

13.5 Supplier Analytics
Predict late deliveries.

14. Security Best Practices
  • Secure the API with JWT
  • Use stored procedures to avoid SQL injection
  • Encrypt sensitive data
  • Implement row-level security if multi-tenant
  • Use HTTPS for Angular app and API

15. Deployment Strategy
Recommended Setup

  • SQL Server: Azure SQL or VM
  • ASP.NET Core: Azure App Service
  • Angular: Azure Static Web App or Blob Storage website
  • ML Services: Azure Container Apps or SQL ML Services
  • Background jobs: Azure Functions or Hangfire

HostForLIFE.eu SQL Server 2022 Hosting
HostForLIFE.eu is European Windows Hosting Provider which focuses on Windows Platform only. We deliver on-demand hosting solutions including Shared hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes.

 



European SQL Server 2022 Hosting - HostForLIFE :: SMTP Configuration Guide and SQL Server Database Mail Setup

clock December 9, 2025 09:26 by author Peter

This article is a beginner-friendly guide to configuring SQL Server Database Mail. It can also be used in documentation or training sessions for new developers.

Verify the Current SQL Server SMTP Configuration

Goal
To confirm whether SQL Server Database Mail has any SMTP profiles set up.

USE msdb;
GO
SELECT * FROM dbo.sysmail_profile;

Create a New Database Mail Profile
Purpose
To create a new Database Mail profile in SQL Server that can be used to send emails via SMTP accounts.

Query
EXEC msdb.dbo.sysmail_add_profile_sp
    @profile_name = 'MailCampaign',
    @description = 'Profile for sending SQL Server emails';


Explanation of Parameters

@profile_name = 'MailCampaign'

    This is the name of your Database Mail profile.

    A profile is like a container for SMTP accounts that SQL Server will use to send emails.

    In your query, 'MailCampaign' is the name you chose for the profile.

    You can change this to any name you want, for example: 'SQLMailer', 'AdminMail', etc.

@description = 'Profile for sending SQL Server emails'

    Optional text describing the purpose of this profile.

    Helps to identify the profile later if you have multiple profiles.

Create SMTP Configuration for Database Mail
Purpose
To configure an SMTP account in SQL Server that will be used by a Database Mail profile to send emails.

Query
EXEC msdb.dbo.sysmail_add_account_sp
    @account_name = 'Mail Campaign',
    @description = 'SMTP account for MailCampaign',
    @email_address = '[email protected]', -- Your email Id
    @display_name = 'NoReply',                   -- Your display name in email
    @replyto_address = '[email protected]', -- Reply-to email Id
    @mailserver_name = 'smtp.sendgrid.net',      -- SMTP server
    @port = 587,                                 -- SMTP port (587 for TLS/STARTTLS)
    @username = 'apikey',                        -- SMTP username (SendGrid uses 'apikey')
    @password = 'SUCXUo5r3uCBZdlw',             -- SMTP password or API key
    @enable_ssl = 1;                             -- Enable SSL/TLS (1 = Yes, 0 = No); required for STARTTLS on port 587


Next Step: Map Account to Profile

After creating the SMTP account, you need to associate it with your Database Mail profile:

Query

EXEC msdb.dbo.sysmail_add_profileaccount_sp
    @profile_name = 'MailCampaign',
    @account_name = 'Mail Campaign',
    @sequence_number = 1;


Send a Test Email
Purpose
To verify that the SMTP configuration and Database Mail profile are working correctly by sending a test email.

Query
EXEC msdb.dbo.sp_send_dbmail
    @profile_name = 'MailCampaign',
    @recipients = '[email protected]',
    @subject = 'Test Email',
    @body = 'Database Mail is now configured correctly!';


SQL Server Database Mail Setup Script
USE msdb;
GO
SELECT *
FROM dbo.sysmail_profile;
--Step 1:
EXEC msdb.dbo.sysmail_add_profile_sp
    @profile_name = 'MailCampaign',
    @description = 'Profile for sending SQL Server emails';
--Step 2:
EXEC msdb.dbo.sysmail_add_account_sp
    @account_name = 'Mail Campaign',
    @description = 'SMTP account for MailCampaign',
    @email_address = '[email protected]', -- Your email Id
    @display_name = 'NoReply',                   -- Your display name in email
    @replyto_address = '[email protected]', -- Reply-to email Id
    @mailserver_name = 'smtp.sendgrid.net',      -- SMTP server
    @port = 587,                                 -- SMTP port (587 for TLS/STARTTLS)
    @username = 'apikey',                        -- SMTP username (SendGrid uses 'apikey')
    @password = 'SUCXUo5r3uCBZdlw',             -- SMTP password or API key
    @enable_ssl = 1;                             -- Enable SSL/TLS (1 = Yes, 0 = No)
--Step 3:
EXEC msdb.dbo.sysmail_add_profileaccount_sp
    @profile_name = 'MailCampaign',
    @account_name = 'Mail Campaign',
    @sequence_number = 1;
--or
EXEC msdb.dbo.sysmail_add_principalprofile_sp
    @profile_name = 'MailCampaign',
    @principal_name = 'public',
    @is_default = 1;
--Step 4:
EXEC msdb.dbo.sp_send_dbmail
    @profile_name = 'MailCampaign',
    @recipients = '[email protected]',
    @subject = 'Test Email',
    @body = 'Database Mail is now configured correctly!';
SELECT * FROM msdb.dbo.sysmail_event_log ORDER BY log_date DESC;
--Your Reference Script
SELECT * FROM msdb.dbo.sysmail_account;
SELECT * FROM msdb.dbo.sysmail_profile;
SELECT * FROM msdb.dbo.sysmail_profileaccount;
SELECT * FROM msdb.dbo.sysmail_event_log ORDER BY log_date DESC;


Explanation
1. Step 0: Check if any Database Mail profiles already exist.
2. Step 1: Creates a new profile (MailCampaign) for sending emails.
3. Step 2: Adds an SMTP account linked to the profile (using SendGrid example).
4. Step 3: Maps the SMTP account to the profile and optionally sets it as the default for all users.
5. Step 4: Sends a test email to verify the configuration.
6. Step 5: Checks the event log and reference tables to troubleshoot or confirm successful setup.
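
To confirm whether the test message was actually delivered rather than just queued, you can also inspect the mail item status views:

-- sent_status is 'sent', 'unsent', 'retrying', or 'failed'
SELECT mailitem_id, recipients, subject, sent_status, sent_date
FROM msdb.dbo.sysmail_allitems
ORDER BY mailitem_id DESC;

-- Failed items with the corresponding error text
SELECT f.mailitem_id, f.subject, l.description
FROM msdb.dbo.sysmail_faileditems f
JOIN msdb.dbo.sysmail_event_log l ON l.mailitem_id = f.mailitem_id
ORDER BY f.mailitem_id DESC;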

HostForLIFEASP.NET SQL Server 2022 Hosting

 



European SQL Server 2022 Hosting :: How to Lower Lock Contention in Financial Ledger Tables in SQL Server?

clock December 5, 2025 07:34 by author Peter

Financial ledger tables are some of the most heavily accessed structures in enterprise systems. They store critical accounting events such as journal entries, postings, adjustments, settlements, accruals, and reversals. These tables often experience very high write concurrency, especially in:

  • ERP posting engines
  • High-volume billing systems
  • Payment gateways
  • Reconciliation services
  • End-of-day and batch processing jobs

As concurrency increases, SQL Server frequently faces lock contention, leading to:

  • Long-running transactions
  • Deadlocks
  • Slow inserts/updates
  • Users experiencing blocking
  • Increased TempDB usage
  • Timeouts in APIs or stored procedures

This article explains why ledger tables suffer from lock contention, and provides practical, implementable techniques to reduce blocking, improve throughput, and increase system stability.

Why Financial Ledger Tables Face High Lock Contention?

1. Constant Writes
Ledger tables often record every financial event.
Even medium systems may generate:

  • 50–200 records per user action
  • Thousands of rows per batch job
  • Millions of rows per day

Continuous writes cause frequent locks.

2. Wide Rows With Many Columns
Ledger rows often contain:

  • CompanyId
  • AccountId
  • Debit/Credit
  • Amount
  • Currency
  • ReferenceId
  • Audit columns

Wide rows increase the cost of locking and reduce throughput.

3. Heavy Use Of Aggregates
Financial reporting queries run concurrently with postings:
SELECT AccountId, SUM(Debit) - SUM(Credit)
FROM Ledger
WHERE AccountingPeriod = '2025-01'
GROUP BY AccountId;


Under the default READ COMMITTED isolation level, such queries take shared locks across large ranges of rows and can escalate to table-level locks.

4. Range Locks Due To Non-Optimized Indexing

If AccountId or PostingDate is not properly indexed, SQL Server uses:

  • Table scans
  • Key-range locks
  • Intent locks

These escalate quickly under concurrency.

5. Long Transactions
Posting operations often perform multiple validations and write across several tables.

Long transactions hold locks longer, increasing blocking.

Lock Contention Symptoms In Ledger Tables

Common Issues

  • Blocking chains
  • Deadlocks involving ledger rows
  • LCK_M_S, LCK_M_U, LCK_M_X waits
  • WRITELOG and PAGELATCH waits
  • API timeouts during accounting operations
  • Transaction log growth due to long-running transactions

Typical Indicators
SELECT * FROM sys.dm_tran_locks
WHERE resource_associated_entity_id = OBJECT_ID('Ledger');

SELECT * FROM sys.dm_os_waiting_tasks
WHERE wait_type LIKE 'LCK%';

High frequency of these waits confirms contention.

Technique 1: Use Appropriate Isolation Levels
Default Behavior
Most systems run in READ COMMITTED, causing shared locks during reads.

Recommended
Enable READ_COMMITTED_SNAPSHOT (RCSI):
ALTER DATABASE FinanceDB SET READ_COMMITTED_SNAPSHOT ON;

Benefits:

  • Readers do not block writers
  • Writers do not block readers
  • Ledger reporting queries become non-blocking

This single change can reduce contention by 70–90%.
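
You can verify that the setting took effect with a quick catalog query:

SELECT name, is_read_committed_snapshot_on, snapshot_isolation_state_desc
FROM sys.databases
WHERE name = 'FinanceDB';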

When To Use Snapshot Isolation?

If high consistency is needed for accounting operations:
ALTER DATABASE FinanceDB SET ALLOW_SNAPSHOT_ISOLATION ON;

Then in code:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;

Use it for:

  • Posting engines
  • Settlement jobs
  • Reconciliation processes

Technique 2: Narrow The Update Scope

Problem
Some ledger updates touch many rows at once:
UPDATE Ledger
SET IsReconciled = 1
WHERE AccountId = @Acct AND PostingDate = @Date;

This creates:

  • Key-range locks
  • Long blocking spans
  • Update lock escalations

Solution
Break the update into small batches:
WHILE 1=1
BEGIN
    WITH cte AS (
        SELECT TOP (500) LedgerId, IsReconciled
        FROM Ledger
        WHERE AccountId = @Acct
          AND PostingDate = @Date
          AND IsReconciled = 0
    )
    UPDATE cte SET IsReconciled = 1;

    IF @@ROWCOUNT < 500 BREAK;
END


Benefits:

  • Smaller locks
  • Faster release of resources
  • Lower deadlock probability

Technique 3: Use Optimized Indexing To Reduce Range Locks
Bad Indexing Pattern
Ledger tables often have an index like:
CREATE INDEX IX_Ledger_AccountId ON Ledger(AccountId);

Missing PostingDate or JournalEntryId forces range locks.

Recommended Covering Index
CREATE INDEX IX_Ledger_Account_Date
ON Ledger(AccountId, PostingDate)
INCLUDE (Debit, Credit, Amount);


Why this helps:

  • Converts table scans into seeks
  • Reduces lock footprints
  • Prevents range locks from escalating

Partitioning Bonus
If table is huge (hundreds of millions):

Partition by PostingDate using a partition function and scheme (boundary values are illustrative):
CREATE PARTITION FUNCTION pf_PostingDate (DATE)
AS RANGE RIGHT FOR VALUES ('2025-01-01', '2025-04-01', '2025-07-01');
This physically separates ledger rows, reducing the lock footprint even further.

Technique 4: Use Insert-Only Patterns (Immutable Ledger Design)
The best ledger models treat entries as immutable:

  • No UPDATE
  • No DELETE
  • Only INSERT

Benefits:

  • No update locks
  • No deadlocks on row modifications
  • Append-only architecture scales extremely well

Example: Instead Of Updating Balances

Bad:
UPDATE AccountBalance
SET ClosingBalance = ClosingBalance + @Amt
WHERE AccountId = @Acct;


Good:
INSERT INTO AccountBalanceHistory(AccountId, AmountChange, Timestamp)
VALUES (@Acct, @Amt, GETUTCDATE());


Then compute balances in reporting layer or via materialized views.

This pattern removes 90% of lock contention.
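
For example, a minimal reporting query over the append-only AccountBalanceHistory table above (or an indexed view built on the same aggregation) might look like:

-- Current balance derived from the append-only history
SELECT AccountId,
       SUM(AmountChange) AS CurrentBalance
FROM AccountBalanceHistory
GROUP BY AccountId;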

Technique 5: Reduce Transaction Length
Problem
Posting operations often run like this:
BEGIN TRAN;

-- validations
-- external API call
-- logging
-- balance adjustment
-- final insert

COMMIT;

Long-running external calls hold locks for seconds.

Solution
Move non-critical logic outside the transaction:
BEGIN TRAN;

-- minimal required operations
INSERT INTO Ledger(...)
INSERT INTO Journal(...)

COMMIT;

-- post-commit logging
-- send notification
-- update cache
-- call external APIs


This shortens lock durations drastically.

Technique 6: Use Row-Versioning To Reduce Write Contention
Add a RowVersion column

ALTER TABLE Ledger
ADD RowVer ROWVERSION;

Use optimistic concurrency:

UPDATE Ledger
SET Amount = @Amt
WHERE LedgerId = @Id AND RowVer = @OldVersion;


If the row changed, SQL Server returns 0 rows.
Client retries instead of blocking.
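
A minimal sketch of the caller-side check; the error number and message are illustrative:

UPDATE Ledger
SET Amount = @Amt
WHERE LedgerId = @Id AND RowVer = @OldVersion;

IF @@ROWCOUNT = 0
BEGIN
    -- Another session changed the row: signal the caller to re-read and retry
    THROW 51000, 'Ledger row was modified by another session - retry the operation.', 1;
END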

Technique 7: Move Operational Queries To Replica
If you use Availability Groups:
Primary = Accept writes
Secondary replicas = Handle reads


Reporting queries such as:
SELECT * FROM Ledger WHERE PostingDate BETWEEN ...

should be routed to replicas, not the primary.

This removes all read locks on the primary ledger.
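
If read-only routing is configured on the Availability Group, the reporting application only needs to declare read intent in its connection string; the listener name below is a placeholder:

Server=tcp:FinanceAG-Listener,1433;Database=FinanceDB;ApplicationIntent=ReadOnly;Encrypt=True;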

Technique 8: Use Scalable Posting Patterns
Pattern 1: FIFO Posting Queue
Producer pushes journal requests to a queue:

  • Service Bus
  • Kafka
  • SQL Queue table

Consumer processes in controlled batches, reducing contention.

Pattern 2: Micro-batches
Process ledger writes in slices:
BatchSize = 1000

Best for high-traffic financial systems.

Pattern 3: Sharded Ledger Tables

Split ledger by:

  • Company
  • Account range
  • Region

This allows parallelism and reduces hot rows.

End-To-End Example: Optimizing Journal Posting

Original (High Contention)

BEGIN TRAN

INSERT INTO Ledger(AccountId, Debit, Credit, PostingDate)
VALUES (@Account, @Debit, @Credit, @Date)

UPDATE AccountBalance
SET Balance = Balance + @NetAmount
WHERE AccountId = @Account

COMMIT


Optimized

  • Use RCSI
  • Insert-only pattern
  • Batch account balance updates

BEGIN TRAN

INSERT INTO Ledger(AccountId, Debit, Credit, PostingDate)
VALUES (@Account, @Debit, @Credit, @Date)

INSERT INTO BalanceDelta(AccountId, AmountChange)
VALUES (@Account, @NetAmount)

COMMIT


A background worker aggregates BalanceDelta periodically:
-- Aggregate per account so each balance row is updated exactly once
UPDATE ab
SET Balance = ab.Balance + d.TotalChange
FROM AccountBalance ab
JOIN (
    SELECT AccountId, SUM(AmountChange) AS TotalChange
    FROM BalanceDelta
    GROUP BY AccountId
) d ON ab.AccountId = d.AccountId;

DELETE FROM BalanceDelta WHERE Processed = 1;


This eliminates hot rows and reduces contention dramatically.
Diagram: Ledger Lock Contention vs Optimized Architecture

Before Optimization

Client Requests → Ledger Table (Hotspot)
                  ↑       ↓
                Updates  Aggregates
                  ↑       ↓
               Blocking / Deadlocks


After Optimization
Client → Ledger (Insert-Only) → Processed Quickly
Client → BalanceDelta → Background Worker → AccountBalance (Low Contention)
Reports → Secondary Replica (No Locks On Primary)


This architecture is used by modern ERP and billing systems.

Conclusion

Financial ledger tables face extreme lock contention because of:

  • High write concurrency
  • Frequent aggregates
  • Wide rows
  • Bad indexing
  • Long transactions

By applying the techniques in this guide (RCSI, batching, optimized indexing, insert-only design, optimistic concurrency, posting queues, and replicas), you can reduce contention by 80–95%, enabling smooth and scalable financial operations.

HostForLIFEASP.NET SQL Server 2022 Hosting



European SQL Server 2022 Hosting - HostForLIFE :: Building a Table for the SQL Rule Engine

clock November 24, 2025 07:37 by author Peter

Instead of hard-coding validation logic, you can model it as data using a SQL-based Rule Engine. This is especially helpful in ERP, CRM, WMS, finance, banking, healthcare, and dynamic form engines, where business rules change constantly but deployments are costly or time-consuming. This article describes how to create a dynamic rule engine stored entirely in SQL using:

  • Rule definition tables
  • Condition groups
  • Dynamic SQL evaluation
  • Parameter injection
  • Runtime orchestration
  • Exception logging
  • Flowcharts + architecture diagrams

Core Problem
Business logic changes often:

  • “If Quantity > Stock → throw error”
  • “If Customer.CreditLimit < Order.Total → fail”
  • “If Item.Category = ‘Hazardous’ → require special approval”
  • “If Vendor is blocked → stop PO creation”

Traditionally:
Developers change code → deploy → test → release.

Better approach:
Store rules in tables → interpret them dynamically → no deployments needed.

Rule Engine Table Design
Below is a battle-tested schema used in real enterprise systems.

Rule Table (Rule Master)
Stores the rule metadata.

CREATE TABLE RuleMaster (
    RuleId INT IDENTITY PRIMARY KEY,
    RuleName VARCHAR(200),
    EntityName VARCHAR(200),         -- “SalesOrder”, “Stockline”, etc.
    IsActive BIT DEFAULT 1,
    Severity VARCHAR(30),            -- "Error", "Warning", "Info"
    ErrorMessage VARCHAR(500),
    CreatedDate DATETIME2 DEFAULT SYSDATETIME()
);

Examples:
“SO_Total_Validation”
“Stock_Availability_Check”
“PO_Block_Vendor_Validation”


Rule Condition Groups
Groups allow AND/OR nesting.

CREATE TABLE RuleConditionGroup (
    GroupId INT IDENTITY PRIMARY KEY,
    RuleId INT FOREIGN KEY REFERENCES RuleMaster(RuleId),
    GroupOperator VARCHAR(5) CHECK (GroupOperator IN ('AND','OR'))
);

Example:
Group 1: Quantity > 0 AND Price > 0
Group 2: Customer.CreditLimit < Order.Total

Rule Conditions
Each group contains conditions stored as rows, not code.

CREATE TABLE RuleCondition (
    ConditionId INT IDENTITY PRIMARY KEY,
    GroupId INT FOREIGN KEY REFERENCES RuleConditionGroup(GroupId),
    LeftOperand VARCHAR(200),        -- Field name, JSON path, SQL expression
    Operator VARCHAR(10),            -- =, <>, >, <, >=, LIKE, IN, BETWEEN, etc.
    RightOperand VARCHAR(200),       -- Literal value or field-to-field
    DataType VARCHAR(20)             -- Int, Decimal, Date, NVarchar
);

Example rows:

LeftOperand   Operator   RightOperand   DataType
OrderTotal    >          CreditLimit    Decimal
Status        <>         'Closed'       NVARCHAR
Quantity      <=         StockQty       Int

Rule Orchestration Table (Optional but Recommended)

Allows mapping rules to events:
CREATE TABLE RuleEventMapping (
    Id INT IDENTITY PRIMARY KEY,
    RuleId INT FOREIGN KEY REFERENCES RuleMaster(RuleId),
    EventName VARCHAR(200)           -- "OnCreate", "OnUpdate", "OnSubmit"
);
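
To see how these tables fit together, here is a hedged seeding example for a simple stock-availability rule. All values are illustrative, and the field-to-field comparison uses the @ prefix expected by the EvaluateRule function shown later:

-- 1. Rule header
INSERT INTO RuleMaster (RuleName, EntityName, Severity, ErrorMessage)
VALUES ('Stock_Availability_Check', 'SalesOrder', 'Error', 'Ordered quantity exceeds available stock.');

DECLARE @RuleId INT = SCOPE_IDENTITY();

-- 2. One condition group combined with AND
INSERT INTO RuleConditionGroup (RuleId, GroupOperator) VALUES (@RuleId, 'AND');
DECLARE @GroupId INT = SCOPE_IDENTITY();

-- 3. Condition: Quantity must not exceed StockQty (field-to-field comparison uses the @ prefix)
INSERT INTO RuleCondition (GroupId, LeftOperand, Operator, RightOperand, DataType)
VALUES (@GroupId, 'Quantity', '<=', '@StockQty', 'Int');

-- 4. Fire the rule when a sales order is created
INSERT INTO RuleEventMapping (RuleId, EventName) VALUES (@RuleId, 'OnCreate');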

Dynamic SQL Generator (Core Engine)
Below is a simplified stored procedure that:

Loads rules

  • Loops through condition groups
  • Builds dynamic SQL
  • Evaluates TRUE/FALSE
  • Returns message

Stored Procedure Skeleton
CREATE PROCEDURE ExecuteRuleEngine
@EntityName VARCHAR(200),
@EventName VARCHAR(100),
@JsonInput NVARCHAR(MAX)        -- entity payload from API
AS
BEGIN
SET NOCOUNT ON;

DECLARE @Errors TABLE (RuleId INT, Message VARCHAR(500));

SELECT R.RuleId, R.RuleName, R.Severity, R.ErrorMessage
INTO #Rules
FROM RuleMaster R
JOIN RuleEventMapping M ON R.RuleId = M.RuleId
WHERE R.EntityName = @EntityName
  AND M.EventName = @EventName
  AND R.IsActive = 1;

DECLARE @RuleId INT;

DECLARE rule_cursor CURSOR FOR
    SELECT RuleId FROM #Rules;

OPEN rule_cursor;
FETCH NEXT FROM rule_cursor INTO @RuleId;

WHILE @@FETCH_STATUS = 0
BEGIN
    IF dbo.EvaluateRule(@RuleId, @JsonInput) = 0
    BEGIN
        INSERT INTO @Errors
        SELECT RuleId, ErrorMessage FROM #Rules WHERE RuleId = @RuleId;
    END

    FETCH NEXT FROM rule_cursor INTO @RuleId;
END

CLOSE rule_cursor;
DEALLOCATE rule_cursor;

SELECT * FROM @Errors;
END


Core Function: EvaluateRule()
This dynamically evaluates a rule using all its conditions.
CREATE FUNCTION EvaluateRule
(
@RuleId INT,
@JsonInput NVARCHAR(MAX)
)
RETURNS BIT
AS
BEGIN
DECLARE @SQL NVARCHAR(MAX) = '';
DECLARE @Result BIT = 1;

SELECT @SQL = STRING_AGG(
    '(' +
    CASE DataType
        WHEN 'Int' THEN 'CAST(JSON_VALUE(@JsonInput, ''$.' + LeftOperand + ''') AS INT)'
        WHEN 'Decimal' THEN 'CAST(JSON_VALUE(@JsonInput, ''$.' + LeftOperand + ''') AS DECIMAL(18,2))'
        WHEN 'Date' THEN 'CAST(JSON_VALUE(@JsonInput, ''$.' + LeftOperand + ''') AS DATETIME2)'
        ELSE 'JSON_VALUE(@JsonInput, ''$.' + LeftOperand + ''')'
    END
    + ' ' + Operator + ' ' +
    CASE
        WHEN LEFT(RightOperand,1)='@' THEN 'JSON_VALUE(@JsonInput, ''$.' + SUBSTRING(RightOperand,2,200) + ''')'
        ELSE QUOTENAME(RightOperand,'''')
    END + ')',
    ' AND '
)
FROM RuleCondition RC
JOIN RuleConditionGroup G ON RC.GroupId = G.GroupId
WHERE G.RuleId = @RuleId;

DECLARE @FinalSQL NVARCHAR(MAX) =
    N'SELECT @Result = CASE WHEN ' + @SQL + N' THEN 1 ELSE 0 END';

EXEC sp_executesql @FinalSQL,
     N'@JsonInput NVARCHAR(MAX), @Result BIT OUTPUT',
     @JsonInput = @JsonInput, @Result = @Result OUTPUT;

RETURN @Result;
END


This function:

  • Converts JSON data into SQL values
  • Generates expressions like:

(CAST(JSON_VALUE(@json,'$.Quantity') AS INT) > CAST(JSON_VALUE(@json,'$.StockQty') AS INT))
AND
(JSON_VALUE(@json,'$.Status') <> 'Closed')

Evaluates the expression
Returns 1 or 0

Note: SQL Server does not allow dynamic SQL (sp_executesql) inside a scalar function, so in production this logic is typically implemented as a stored procedure with a BIT OUTPUT parameter; the calling pattern in ExecuteRuleEngine stays the same apart from the EXEC syntax.

Example: Real Rule Engine in Action
Rule

If OrderTotal > CreditLimit OR Customer is blocked → fail.

Conditions stored in database

Group   Left         Operator   Right         DataType
1       OrderTotal   >          CreditLimit   Decimal
1       IsBlocked    =          1             Int

Generated SQL
SELECT CASE WHEN
(
  CAST(JSON_VALUE(@json,'$.OrderTotal') AS DECIMAL(18,2)) >CAST(JSON_VALUE(@json,'$.CreditLimit') AS DECIMAL(18,2))
)
OR
(
  CAST(JSON_VALUE(@json,'$.IsBlocked') AS INT) = 1
)
THEN 0 ELSE 1 END


Logging & Monitoring
Use:
CREATE TABLE RuleEngineLog (
RunId BIGINT IDENTITY PRIMARY KEY,
RuleId INT,
EntityName VARCHAR(200),
InputPayload NVARCHAR(MAX),
Result BIT,
Message VARCHAR(500),
CreatedDate DATETIME2 DEFAULT SYSDATETIME()
);


Best Practices
Do

  • Pre-validate JSON schema using ISJSON
  • Cache rule metadata
  • Avoid string concatenation inside cursor
  • Create reusable SQL UDFs (e.g., CompareValues)
  • Add unit tests for each rule

Avoid

  • Very large nested OR conditions
  • Using dynamic SQL inside loops without batching
  • Storing complex formulas directly in text fields

Final Notes
This SQL Rule Engine design is:

  • Flexible
  • Enterprise-grade
  • Deployment-free
  • Extensible
  • Highly maintainable

Perfect for ERP, WMS, Finance, Insurance, Enterprise SaaS

HostForLIFEASP.NET SQL Server 2022 Hosting



European SQL Server 2022 Hosting :: Creating a Data Versioning System that stores each record's history (SQL Server +.NET)

clock November 19, 2025 07:18 by author Peter

A data versioning system records every modification made to your information, including who made what changes, when they were made, and how to audit or restore previous versions. Audit, debugging, GDPR and financial compliance, and capabilities like "time travel," record revert, and change comparisons all depend on this.

This article provides a workable, production-ready design, covering several implementation alternatives (built-in temporal tables, manual history tables with triggers, and application-level capture) along with ER diagrams, flowcharts, SQL scripts, ASP.NET Core patterns, and best practices.

Two main approaches (summary)

  • System-versioned temporal tables (SQL Server feature, simple, performant, automatic).
    • Pros: automatic row-versioning, built-in time travel query, efficient.
    • Cons: less flexible metadata (who/why), harder to store JSON diffs, needs SQL Server 2016+.
  • Manual history tables + triggers / stored procedures.
    • Pros: full control (store user, reason, JSON diffs, tx ids), easier to extend.
    • Cons: more code, need to manage triggers and retention.

Additionally, application-level capture (EF Core interceptors, the change tracker) can supplement either approach by adding user/context information and business logic.

Option A: System-versioned temporal table (recommended when available)
1. Create temporal table

CREATE TABLE dbo.Customer
(
    CustomerId   UNIQUEIDENTIFIER NOT NULL PRIMARY KEY DEFAULT NEWID(),
    Name         NVARCHAR(200) NOT NULL,
    Email        NVARCHAR(200),
    Balance      DECIMAL(18,2) DEFAULT 0,
    SysStartTime DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
    SysEndTime   DATETIME2 GENERATED ALWAYS AS ROW END NOT NULL,
    PERIOD FOR SYSTEM_TIME (SysStartTime, SysEndTime)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.CustomerHistory));

SQL Server creates dbo.CustomerHistory automatically with the row versions. But CustomerHistory does not include ChangedBy or ChangeReason.

2. Add audit metadata (who/why)
You can store user info in a separate audit table, extend the approach with triggers that insert augmented history rows, or write to an audit table from the application.
Example: keep CustomerHistoryMeta where you store (HistoryRowPK, ChangedBy, ChangeReason, TxId) linked to history rows using SysStartTime and CustomerId as keys.
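
A hedged sketch of such a metadata table; the column names are assumptions based on the description above:

CREATE TABLE dbo.CustomerHistoryMeta
(
    MetaId        BIGINT IDENTITY PRIMARY KEY,
    CustomerId    UNIQUEIDENTIFIER NOT NULL,
    SysStartTime  DATETIME2 NOT NULL,      -- joins to the matching row in dbo.CustomerHistory
    ChangedBy     NVARCHAR(200) NOT NULL,
    ChangeReason  NVARCHAR(500) NULL,
    TxId          UNIQUEIDENTIFIER NULL
);

CREATE INDEX IX_CustomerHistoryMeta_Key ON dbo.CustomerHistoryMeta (CustomerId, SysStartTime);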

3. Query history (time travel)
-- Get record as of a point in time
SELECT *
FROM dbo.Customer
FOR SYSTEM_TIME AS OF '2025-11-01 10:00:00'
WHERE CustomerId = '...';

-- Get all versions
SELECT * FROM dbo.Customer
FOR SYSTEM_TIME ALL
WHERE CustomerId = '...'
ORDER BY SysStartTime;


4. Revert to historical version (pattern)
To revert, read the historical row and insert a new current row (or update current row) — do not “restore” history row directly; write a new change so revert is also tracked.
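
A minimal revert sketch against the temporal table above, where @CustomerId is the target customer and the timestamp is illustrative:

-- Copy the historical state over the current row; system-versioning records this as a new change
UPDATE c
SET c.Name = h.Name,
    c.Email = h.Email,
    c.Balance = h.Balance
FROM dbo.Customer c
JOIN dbo.Customer FOR SYSTEM_TIME AS OF '2025-11-01 10:00:00' AS h
  ON h.CustomerId = c.CustomerId
WHERE c.CustomerId = @CustomerId;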

Option B: Manual history table + trigger (flexible, full metadata)
1. Schema (example for table Customer)
CREATE TABLE dbo.Customer
(
    CustomerId UNIQUEIDENTIFIER PRIMARY KEY DEFAULT NEWID(),
    Name NVARCHAR(200),
    Email NVARCHAR(200),
    Balance DECIMAL(18,2)
);

CREATE TABLE dbo.CustomerHistory
(
    HistoryId BIGINT IDENTITY PRIMARY KEY,
    CustomerId UNIQUEIDENTIFIER NOT NULL,
    ChangeType CHAR(1) NOT NULL, -- I/U/D
    ChangedBy NVARCHAR(200) NULL,
    ChangeReason NVARCHAR(500) NULL,
    ChangedAt DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME(),
    PayloadJson NVARCHAR(MAX) NOT NULL, -- full row snapshot as JSON
    TxId UNIQUEIDENTIFIER NOT NULL DEFAULT NEWID()
);

CREATE INDEX IX_CustomerHistory_CustomerId ON dbo.CustomerHistory(CustomerId);
CREATE INDEX IX_CustomerHistory_TxId ON dbo.CustomerHistory(TxId);


2. Trigger to capture changes
CREATE TRIGGER trg_Customer_Audit
ON dbo.Customer
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @txid UNIQUEIDENTIFIER = NEWID();
    DECLARE @changedBy NVARCHAR(200) = SUSER_SNAME(); -- replace via CONTEXT_INFO if app sets

    -- INSERTED rows -> Insert
    INSERT INTO dbo.CustomerHistory (CustomerId, ChangeType, ChangedBy, ChangeReason, PayloadJson, TxId)
    SELECT i.CustomerId, 'I', @changedBy, NULL, (SELECT i.* FOR JSON PATH, WITHOUT_ARRAY_WRAPPER), @txid
    FROM inserted i;

    -- UPDATED rows -> Update (capture new snapshot or both old & new as needed)
    INSERT INTO dbo.CustomerHistory (CustomerId, ChangeType, ChangedBy, ChangeReason, PayloadJson, TxId)
    SELECT u.CustomerId, 'U', @changedBy, NULL, (SELECT u.* FOR JSON PATH, WITHOUT_ARRAY_WRAPPER), @txid
    FROM inserted u
    WHERE EXISTS (SELECT 1 FROM deleted d WHERE d.CustomerId = u.CustomerId);

    -- DELETED rows -> Delete
    INSERT INTO dbo.CustomerHistory (CustomerId, ChangeType, ChangedBy, ChangeReason, PayloadJson, TxId)
    SELECT d.CustomerId, 'D', @changedBy, NULL, (SELECT d.* FOR JSON PATH, WITHOUT_ARRAY_WRAPPER), @txid
    FROM deleted d;
END
Notes


SUSER_SNAME() gives SQL login; for application user prefer setting SESSION_CONTEXT or CONTEXT_INFO before DML and reading it in trigger (see below).

PayloadJson stores full state snapshot; you may instead store only changed columns (diff) if storage is a concern.

3. Passing application user into trigger
Before executing DML, set session context from application:
EXEC sp_set_session_context 'AppUser', '[email protected]';
Then inside trigger:

DECLARE @changedBy NVARCHAR(200) = CONVERT(NVARCHAR(200), SESSION_CONTEXT(N'AppUser'));

This makes ChangedBy accurate.

4. Stored procedures to query history
CREATE PROCEDURE usp_GetCustomerHistory
    @CustomerId UNIQUEIDENTIFIER
AS
BEGIN
    SELECT HistoryId, ChangeType, ChangedBy, ChangeReason, ChangedAt, PayloadJson
    FROM dbo.CustomerHistory
    WHERE CustomerId = @CustomerId
    ORDER BY ChangedAt DESC;
END


5. Revert procedure (create new record state from history)
CREATE PROCEDURE usp_RevertCustomerToHistory
    @HistoryId BIGINT,
    @RevertedBy NVARCHAR(200)
AS
BEGIN
    DECLARE @payload NVARCHAR(MAX);
    SELECT @payload = PayloadJson FROM dbo.CustomerHistory WHERE HistoryId = @HistoryId;

    -- parse JSON into columns and update current table
    UPDATE dbo.Customer
    SET Name = JSON_VALUE(@payload, '$.Name'),
        Email = JSON_VALUE(@payload, '$.Email'),
        Balance = TRY_CAST(JSON_VALUE(@payload, '$.Balance') AS DECIMAL(18,2))
    WHERE CustomerId = JSON_VALUE(@payload, '$.CustomerId');

    -- insert a history record marking revert
    INSERT INTO dbo.CustomerHistory (CustomerId, ChangeType, ChangedBy, ChangeReason, PayloadJson, TxId)
    VALUES (JSON_VALUE(@payload, '$.CustomerId'), 'U', @RevertedBy, 'Revert to HistoryId ' + CAST(@HistoryId AS NVARCHAR(20)), @payload, NEWID());
END


Application-level capture (EF Core interceptor) - add user / reason
If you use EF Core, intercept SaveChanges to write audit to history table so you have full contextual data (user id, IP, reason).

Example (simplified)
public class AuditSaveChangesInterceptor : SaveChangesInterceptor
{
    private readonly IHttpContextAccessor _http;
    public AuditSaveChangesInterceptor(IHttpContextAccessor http) => _http = http;

    public override async ValueTask<InterceptionResult<int>> SavingChangesAsync(DbContextEventData eventData,
        InterceptionResult<int> result, CancellationToken cancellationToken = default)
    {
        var ctx = eventData.Context;
        var user = _http.HttpContext?.User?.Identity?.Name ?? "system";
        var entries = ctx.ChangeTracker.Entries().Where(e => e.State == EntityState.Modified
                                                           || e.State == EntityState.Added
                                                           || e.State == EntityState.Deleted);

        foreach (var entry in entries)
        {
            var payload = JsonSerializer.Serialize(entry.CurrentValues.ToObject()); // or build object
            var history = new CustomerHistory
            {
                CustomerId = (Guid)entry.Property("CustomerId").CurrentValue,
                ChangeType = entry.State == EntityState.Added ? "I" : entry.State == EntityState.Deleted ? "D" : "U",
                ChangedBy = user,
                PayloadJson = payload,
                ChangedAt = DateTime.UtcNow,
                TxId = Guid.NewGuid()
            };
            ctx.Set<CustomerHistory>().Add(history);
        }

        return await base.SavingChangesAsync(eventData, result, cancellationToken);
    }
}

Register interceptor in Program.cs for EF Core.

Benefits: you have direct access to user principal and request info.

Query patterns & useful API endpoints

  • GET /api/entity/{id}/history — list versions.
  • GET /api/entity/{id}/history/{historyId} — fetch specific version.
  • GET /api/entity/{id}/diff?from=histA&to=histB — return field-level diff.
  • POST /api/entity/{id}/revert — revert to historyId (authz required).
  • GET /api/entity/{id}/asOf?timestamp=... — time-travel view (temporal tables easy).

Example C# controller methods using Dapper/EF Core — omitted for brevity (pattern same as stored procs).

Field-level diff (practical approach)
Store PayloadJson for each version (full row).

To compute diff, fetch two JSON objects and compare keys; report changed fields, old and new values. Use server code (C#) to parse JSON into Dictionary<string, object> and compare.

Example diff function (C# pseudocode)
Dictionary<string, object> left = JsonSerializer.Deserialize<Dictionary<string, object>>(leftJson);
Dictionary<string, object> right = JsonSerializer.Deserialize<Dictionary<string, object>>(rightJson);
var diffs = new List<Diff>();
foreach(var key in left.Keys.Union(right.Keys))
{
    left.TryGetValue(key, out var lv);
    right.TryGetValue(key, out var rv);
    if (!object.Equals(lv, rv))
        diffs.Add(new Diff { Field = key, Old = lv?.ToString(), New = rv?.ToString() });
}

Return diffs to UI.

Retention, compression and archiving

  • Keep history for required retention window (e.g., 2–7 years) per compliance.
  • Archive older history to cheaper storage (Parquet/JSON files in Blob/S3). Provide a process to purge archived rows.
  • Consider compressing PayloadJson (e.g., gzip) if history large. Store as VARBINARY or use table compression features.
  • Avoid storing huge BLOBs repeatedly; store references instead.

Concurrency, transaction and tx-id handling
Use a TxId and TransactionCommittedAt in history to group related row changes into a single logical transaction. For trigger approach, generate TxId from CONTEXT_INFO or SESSION_CONTEXT set by application before DML. In triggers use that TxId instead of NEWID() if you want grouping.

Example in application
EXEC sp_set_session_context @key = 'TxId', @value = '...';

Trigger reads

DECLARE @txid UNIQUEIDENTIFIER = CONVERT(uniqueidentifier, SESSION_CONTEXT(N'TxId'));

This allows you to revert a complete transaction by selecting all history rows with same TxId.
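
For example, to inspect (or drive a revert of) everything that happened in one logical transaction:

SELECT HistoryId, CustomerId, ChangeType, ChangedBy, ChangedAt, PayloadJson
FROM dbo.CustomerHistory
WHERE TxId = @TxId
ORDER BY ChangedAt;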

Security & GDPR considerations
Protect personal data: encryption at rest and in transit.

Provide delete/forget workflows: redact PII in history if user requests (but preserve audit trail per law). Consider pseudonymisation.

Audit access to history itself (who viewed history). Log access events separately.

Restrict revert endpoints to authorized roles.

Performance considerations

  • Index history tables on (EntityId, ChangedAt).
  • Partition large history tables by date (Monthly/Yearly).
  • Use compression (ROW/ PAGE) on history partitions.
  • For high write volumes prefer temporal tables which are highly optimised, or use append-only history tables with minimal indexes to speed inserts; create read-optimized projections for reporting.

Testing checklist

  • Verify INSERT / UPDATE / DELETE produce history rows with correct payload.
  • Test user propagation via SESSION_CONTEXT.
  • Test revert and confirm a new history row created.
  • Test time-travel queries for temporal tables.
  • Test diff outputs for correctness.
  • Test retention/archiving and reimport from archive.
  • Load-test writes and history insert rate.

Example migration & scripts (summary)

  • Add history table schema for each entity.
  • Add triggers or enable temporal versioning.
  • Add SyncWatermark table if you need cross-system sync.
  • Add stored procedures to fetch history, revert, cleanup/archive.
  • Add application interceptor to set SESSION_CONTEXT('AppUser') and optional TxId.
  • Build APIs for UI and admin tasks.

Final recommendations

  • If you are on SQL Server 2016+ and simply need time-travel and row-version history queries, use system-versioned temporal tables (easier, performant).
  • Use manual history tables and triggers, combined with application-level session context, if you need richer metadata (who/why), JSON diffs, or transaction-level revert/grouping.
  • Combine the two methods: use temporal tables for automatic versioning and keep a separate AuditMeta table, populated by the application or a trigger, to link history rows to user/transaction metadata.
  • Automate monitoring, archiving, and retention.

HostForLIFEASP.NET SQL Server 2022 Hosting



European SQL Server 2022 Hosting :: Identifying SQL Server "Cannot Initialize Data Source Object" Errors

clock November 11, 2025 08:04 by author Peter

While working with Linked Servers, OPENQUERY, or OPENROWSET in SQL Server, you may encounter one of the most common and frustrating errors:
OLE DB provider "Microsoft.ACE.OLEDB.12.0" for linked server "(null)" returned message "Cannot initialize data source object of OLE DB provider".

This error usually occurs when SQL Server is unable to access or initialize the external data source (like Excel, Access, or another SQL Server).

In this guide, we’ll break down:

  • The main causes of this error
  • Step-by-step troubleshooting
  • Common scenarios (Excel, Access, Linked Servers)
  • Configuration & security fixes

Common Scenarios Where the Error Appears

  • Querying Excel via OPENROWSET:
    SELECT * FROM OPENROWSET('Microsoft.ACE.OLEDB.12.0', 'Excel 12.0;Database=C:\Data\Sales.xlsx;HDR=YES', 'SELECT * FROM [Sheet1$]');
  • Accessing an Access database:
    SELECT * FROM OPENROWSET('Microsoft.ACE.OLEDB.12.0', 'C:\Data\Customer.accdb';'admin';'', 'SELECT * FROM Customers');
  • Using a Linked Server:
    SELECT * FROM OPENQUERY(ExcelLink, 'SELECT * FROM [Sheet1$]');

If any of these fail, you’ll often see the “Cannot initialize data source object” error.

Root Causes of the Error

Here are the most common reasons this error occurs:

  • The SQL Server service account lacks permission to read the source file
  • 'Ad Hoc Distributed Queries' is disabled on the server
  • The required OLE DB provider (e.g., Microsoft.ACE.OLEDB.12.0) is not installed
  • 32-bit vs 64-bit mismatch between SQL Server and the provider
  • The source file is open, locked, or on an inaccessible network path
  • An incorrect connection string, file path, or sheet name

Step-by-Step Troubleshooting Guide
Step 1: Check File Permissions

  • Locate the file (e.g., C:\Data\Sales.xlsx).
  • Right-click → Properties → Security tab.
  • Ensure the SQL Server service account (like NT SERVICE\MSSQLSERVER or Network Service) has read/write permissions.

If not sure which account SQL uses, run:
SELECT servicename, service_account
FROM sys.dm_server_services;


Step 2: Enable Ad Hoc Distributed Queries
Run the following in SSMS:
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'Ad Hoc Distributed Queries', 1;
RECONFIGURE;


Then re-run your OPENROWSET or OPENDATASOURCE query.

Step 3: Verify OLE DB Provider Installation
Check if the required OLE DB provider is installed:
  • For Excel/Access → Microsoft.ACE.OLEDB.12.0
  • For SQL Server-to-SQL Server → SQLNCLI or MSOLEDBSQL


You can verify it using:
EXEC master.dbo.sp_enum_oledb_providers;

Step 4: Check 32-bit vs 64-bit Compatibility
A 64-bit SQL Server instance requires the 64-bit version of the OLE DB provider; the provider bitness must match the SQL Server service, not SSMS.

If only the 32-bit provider is installed on the machine, install the matching 64-bit provider or run the import from a machine where the bitness matches.

Step 5: Ensure File Is Closed and Accessible
If the Excel file is open by another user or locked for editing, SQL can’t read it.
Close the file and retry.

If it’s on a network path, ensure:
\\ServerName\SharedFolder\File.xlsx

is accessible from the SQL Server machine using the same service account credentials.

Step 6: Test Connection String
Try running this minimal query:
SELECT *
FROM OPENROWSET(
    'Microsoft.ACE.OLEDB.12.0',
    'Excel 12.0;Database=C:\Data\Test.xlsx;HDR=YES',
    'SELECT TOP 5 * FROM [Sheet1$]'
);

If it works with a simple file, the issue is likely your original path or sheet name.

Step 7: Configure Linked Server Options
If using Linked Server for Excel or Access:
EXEC sp_addlinkedserver
    @server='ExcelLink',
    @srvproduct='Excel',
    @provider='Microsoft.ACE.OLEDB.12.0',
    @datasrc='C:\Data\Sales.xlsx',
    @provstr='Excel 12.0;HDR=YES';

EXEC sp_serveroption 'ExcelLink', 'Data Access', TRUE;

Then test:
SELECT * FROM OPENQUERY(ExcelLink, 'SELECT * FROM [Sheet1$]');
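
If the provider is installed but initialization still fails, a commonly suggested workaround is to let the ACE provider run in-process. Note that sp_MSset_oledb_prop is undocumented, so treat this as a hedged fix and test it outside production first:

EXEC master.dbo.sp_MSset_oledb_prop N'Microsoft.ACE.OLEDB.12.0', N'AllowInProcess', 1;
EXEC master.dbo.sp_MSset_oledb_prop N'Microsoft.ACE.OLEDB.12.0', N'DynamicParameters', 1;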


Advanced Troubleshooting Tips

  • Reboot the SQL Service after installing new OLE DB drivers.
  • If you’re running on SQL Server Express, ensure it supports Distributed Queries.
  • Avoid UNC paths (\\Server\Folder\File.xlsx) unless the SQL service has domain permissions.

Check Event Viewer logs under Application → MSSQLSERVER for detailed provider errors.

Alternative Approaches

If the problem persists, consider alternatives:

  • Use Import/Export Wizard (in SSMS) instead of OPENROWSET.
  • Use BULK INSERT for CSV data.
  • For Access, use ODBC Linked Tables or .NET Integration in your application layer.

HostForLIFEASP.NET SQL Server 2022 Hosting



European SQL Server 2022 Hosting :: Developers' Best Practices for SQL Server Security

clock November 3, 2025 06:26 by author Peter

Security is one of the most crucial components of any database system. As developers, we often focus on performance and overlook the fact that a single unsafe query or overly broad permission can expose private company data. In this post, let's examine SQL Server security best practices that every developer should follow, with examples, explanations, and a simple flowchart.

The Significance of Security
Modern applications store critical data such as user passwords, financial transactions, and personal information.
If your SQL Server isn't set up securely, hackers can take advantage of weaknesses like:

  • SQL injection
  • Privilege escalation
  • Information leakage
  • Unauthorized access

That's why database-level security must be a shared responsibility between developers and DBAs.


Security Layers in SQL Server
Before jumping into best practices, understand that SQL Server security has multiple layers :

  • Authentication: Who can access the server
  • Authorization: What actions they can perform
  • Encryption: How data is protected in transit and at rest
  • Auditing: Tracking who did what and when

Best Practices for Developers
Let’s break down the most essential security best practices step by step.

1. Use Parameterized Queries (Prevent SQL Injection)

Never concatenate user input directly in your SQL statements.

Vulnerable Example

string query = "SELECT * FROM Users WHERE Username = '" + userInput + "'";

Safe Example
SqlCommand cmd = new SqlCommand("SELECT * FROM Users WHERE Username = @Username", conn);
cmd.Parameters.AddWithValue("@Username", userInput);

Why: Parameterized queries ensure that input is treated as data, not executable SQL, preventing SQL injection attacks.

2. Follow the Principle of Least Privilege
Grant only the permissions required — nothing more.

Don’t

  • Use sa or system admin accounts for applications.
  • Give db_owner role to every user. 

Do

  • Create application-specific users with limited access.
  • Assign roles like db_datareader or db_datawriter as needed.
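
A minimal sketch of the 'Do' items above; the names and password are placeholders:

-- Dedicated application login and user with only the rights it needs
CREATE LOGIN AppLogin WITH PASSWORD = 'Use-A-Strong-Password-Here!';
CREATE USER AppUser FOR LOGIN AppLogin;

ALTER ROLE db_datareader ADD MEMBER AppUser;
ALTER ROLE db_datawriter ADD MEMBER AppUser;
GRANT EXECUTE TO AppUser;   -- allow stored procedure calls only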

3. Secure Connection Strings
Never store connection strings in plain text inside your source code.
Use Configuration Files or Secrets Manager:

  • .NET Core: Store in appsettings.json and protect with User Secrets or Azure Key Vault.
  • Windows: Use DPAPI or Encrypted Configuration Sections.

Example
"ConnectionStrings": {
  "DefaultConnection": "Server=myServer;Database=myDB;User Id=appUser;Password=***;"
}


4. Encrypt Sensitive Data
Use SQL Server encryption features to protect confidential data.

Transparent Data Encryption (TDE)

Encrypts the database files (.mdf, .ldf) at rest.
CREATE DATABASE ENCRYPTION KEY
WITH ALGORITHM = AES_256
ENCRYPTION BY SERVER CERTIFICATE MyServerCert;
ALTER DATABASE MyDB SET ENCRYPTION ON;


Column-Level Encryption
Encrypt specific columns like passwords or credit card numbers.

CREATE CERTIFICATE MyColumnCert WITH SUBJECT = 'Column encryption certificate';
CREATE SYMMETRIC KEY CreditCardKey
WITH ALGORITHM = AES_256
ENCRYPTION BY CERTIFICATE MyColumnCert;
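
A hedged usage sketch with the key above, assuming a database master key already exists and a VARBINARY column (CardNumberEnc) to hold the ciphertext:

OPEN SYMMETRIC KEY CreditCardKey DECRYPTION BY CERTIFICATE MyColumnCert;

-- Encrypt on write
UPDATE Customers
SET CardNumberEnc = ENCRYPTBYKEY(KEY_GUID('CreditCardKey'), CardNumber);

-- Decrypt on read
SELECT CONVERT(NVARCHAR(50), DECRYPTBYKEY(CardNumberEnc)) AS CardNumber
FROM Customers;

CLOSE SYMMETRIC KEY CreditCardKey;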

5. Avoid Hardcoding Credentials
Never hardcode usernames, passwords, or keys in your code.
Always use:

  • Environment variables
  • Secure configuration management
  • Secret stores (e.g., Azure Key Vault, AWS Secrets Manager)

6. Enable Row-Level Security (RLS)
Row-Level Security restricts data visibility based on user or role.

Example
CREATE SECURITY POLICY SalesFilter
ADD FILTER PREDICATE dbo.fnSecurityPredicate(UserID)
ON dbo.Sales WITH (STATE = ON);


This ensures each user can only see data they are authorized to view.
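
The predicate function referenced above is not shown in this post; a minimal sketch, assuming the current user id is stored in SESSION_CONTEXT, might look like this:

CREATE FUNCTION dbo.fnSecurityPredicate (@UserID INT)
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN
    -- Row is visible only when it belongs to the user stored in session context
    SELECT 1 AS AccessResult
    WHERE @UserID = CAST(SESSION_CONTEXT(N'UserId') AS INT);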

7. Implement Data Masking
Use Dynamic Data Masking to hide sensitive information from unauthorized users.
ALTER TABLE Customers
ALTER COLUMN Email ADD MASKED WITH (FUNCTION = 'email()');

Result
Admin sees the full email address.
Analyst sees the masked value (the default email() mask shows only the first character, e.g., aXXX@XXXX.com).


8. Regularly Patch and Update SQL Server
Always apply the latest SQL Server Service Packs and Cumulative Updates .
Outdated versions often contain known vulnerabilities that hackers exploit.

9. Use Secure Network Connections (SSL/TLS)
Enable encryption for data in transit.
Force Encryption = Yes


In the connection string:
Encrypt=True;TrustServerCertificate=False;

10. Audit and Monitor Database Activity
Enable SQL Server Audit to track actions such as login attempts, schema changes, or data access.
CREATE SERVER AUDIT Audit_LoginTracking
TO FILE (FILEPATH = 'C:\AuditLogs\')
WITH (ON_FAILURE = CONTINUE);

Then:
ALTER SERVER AUDIT Audit_LoginTracking WITH (STATE = ON);

You can later review logs to identify suspicious activities.
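
A server audit by itself records nothing until an audit specification tells it which action groups to capture; a minimal sketch for login tracking:

CREATE SERVER AUDIT SPECIFICATION Audit_LoginTracking_Spec
FOR SERVER AUDIT Audit_LoginTracking
    ADD (FAILED_LOGIN_GROUP),
    ADD (SUCCESSFUL_LOGIN_GROUP)
WITH (STATE = ON);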

Flowchart: SQL Server Security Flow

Here’s a simple visualization of how SQL Server enforces security at multiple layers:

            ┌─────────────────────────┐
            │   User / Application    │
            └──────────┬──────────────┘
                       │
                       ▼
         ┌────────────────────────┐
         │ Authentication Layer   │
         │ (Login / Password / AD)│
         └─────────────┬──────────┘
                       │
                       ▼
         ┌────────────────────────┐
         │ Authorization Layer    │
         │ (Roles, Permissions)   │
         └─────────────┬──────────┘
                       │
                       ▼
         ┌─────────────────────────┐
         │ Row-Level / Data Access │
         │ (RLS, Masking, Filters) │
         └─────────────┬───────────┘
                       │
                       ▼
         ┌─────────────────────────┐
         │ Encryption Layer        │
         │ (TDE, SSL, Column Key)  │
         └─────────────┬───────────┘
                       │
                       ▼
         ┌─────────────────────────┐
         │ Auditing & Monitoring   │
         │ (Logs, Alerts, Reports) │
         └─────────────────────────┘

This layered approach ensures defense at every step of the data access process.

Final Checklist for Developers

Security Area        Best Practice               Example
Input Handling       Parameterized Queries       @param
Access Control       Least Privilege             Limited Roles
Data Protection      Encryption & Masking        TDE / AES
Secrets Management   No Hardcoded Credentials    Azure Key Vault
Monitoring           SQL Server Audit            Audit Logs


Conclusion

Database security must be built into the application from the start by developers, not left to DBAs alone.

Together, authentication, authorization, encryption, and auditing make up a secure SQL Server configuration.
Remember:
"Performance problems can harm your app, but security problems can ruin your company."

So make SQL Server security a habit rather than a one-time checklist by applying these practices consistently.

HostForLIFEASP.NET SQL Server 2022 Hosting



European SQL Server 2022 Hosting :: Understanding Complex SQL Server Queries: Turning Data into Insight

clock October 29, 2025 08:43 by author Peter

SQL Server remains one of the most powerful relational database systems in today's data-driven world, and the ability to write sophisticated SQL queries is often what separates novices from professionals. Complex queries are the foundation of business reports, analytics dashboards, and performance-driven applications.

This article walks through real-world complex SQL Server queries, including joins, subqueries, window functions, CTEs, and pivoting, with clear examples.

1. Using Subqueries for Conditional Data Retrieval
Scenario:
You need to find all employees whose salary is higher than the average salary of their department.
Query:
SELECT
    EmpName,
    DepartmentId,
    Salary
FROM Employees E
WHERE Salary > (
    SELECT AVG(Salary)
    FROM Employees
    WHERE DepartmentId = E.DepartmentId
);


Explanation:

  • The inner query calculates the average salary per department.
  • The outer query compares each employee’s salary to that departmental average.

This is a correlated subquery: the inner query references E.DepartmentId from the outer query, so it is evaluated for each outer row.
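
The same requirement can also be expressed without a correlated subquery by computing the departmental average with a window function, which some find easier to read; a sketch against the same Employees table:

SELECT EmpName, DepartmentId, Salary
FROM (
    SELECT
        EmpName,
        DepartmentId,
        Salary,
        AVG(Salary) OVER (PARTITION BY DepartmentId) AS DeptAvgSalary
    FROM Employees
) AS WithAvg
WHERE Salary > DeptAvgSalary;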

2. Combining Multiple Tables with JOINS
Scenario:
Retrieve all orders along with customer names and product details.
Query:
SELECT
    C.CustomerName,
    O.OrderId,
    P.ProductName,
    O.OrderDate,
    OD.Quantity,
    (OD.Quantity * OD.UnitPrice) AS TotalAmount
FROM Orders O
INNER JOIN Customers C ON O.CustomerId = C.CustomerId
INNER JOIN OrderDetails OD ON O.OrderId = OD.OrderId
INNER JOIN Products P ON OD.ProductId = P.ProductId
WHERE O.OrderDate >= '2025-01-01'
ORDER BY O.OrderDate DESC;


Explanation:
This query combines four tables using inner joins to give a comprehensive view of orders placed on or after January 1, 2025.

3. Ranking Data Using Window Functions
Scenario:
List top 3 highest-paid employees in each department.
Query:
SELECT
    DepartmentId,
    EmpName,
    Salary,
    RANK() OVER (PARTITION BY DepartmentId ORDER BY Salary DESC) AS SalaryRank
FROM Employees
WHERE Salary IS NOT NULL


Then wrap it to filter top 3:
SELECT *
FROM (
    SELECT
        DepartmentId,
        EmpName,
        Salary,
        RANK() OVER (PARTITION BY DepartmentId ORDER BY Salary DESC) AS SalaryRank
    FROM Employees
) AS Ranked
WHERE SalaryRank <= 3;


Explanation:
The RANK() function assigns ranking per department.
The outer query filters the top 3 salaries per department.
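
Keep in mind that RANK() can return more than three rows per department when salaries tie, and it leaves gaps after ties. Depending on the requirement, DENSE_RANK() or ROW_NUMBER() may be a better fit; a quick side-by-side comparison:

SELECT
    DepartmentId,
    EmpName,
    Salary,
    RANK()       OVER (PARTITION BY DepartmentId ORDER BY Salary DESC) AS RankWithGaps,
    DENSE_RANK() OVER (PARTITION BY DepartmentId ORDER BY Salary DESC) AS DenseRank,
    ROW_NUMBER() OVER (PARTITION BY DepartmentId ORDER BY Salary DESC) AS RowNum -- filter on this for exactly 3 rows per department
FROM Employees;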

4. Simplifying Logic with Common Table Expressions (CTE)
Scenario:
Find employees who earn more than the average salary in their department (using CTE for clarity).
Query:
WITH DeptAverage AS (
    SELECT
        DepartmentId,
        AVG(Salary) AS AvgSalary
    FROM Employees
    GROUP BY DepartmentId
)
SELECT
    E.EmpName,
    E.DepartmentId,
    E.Salary,
    D.AvgSalary
FROM Employees E
JOIN DeptAverage D ON E.DepartmentId = D.DepartmentId
WHERE E.Salary > D.AvgSalary;

Explanation:
The CTE (DeptAverage) simplifies complex nested subqueries, making the query more readable and maintainable.

5. Transforming Rows to Columns with PIVOT
Scenario:
Show total sales by product across different months.
Query:
SELECT *
FROM (
    SELECT
        ProductName,
        DATENAME(MONTH, OrderDate) AS [Month],
        (Quantity * UnitPrice) AS TotalSales
    FROM OrderDetails OD
    JOIN Orders O ON OD.OrderId = O.OrderId
    JOIN Products P ON OD.ProductId = P.ProductId
) AS SourceTable
PIVOT (
    SUM(TotalSales)
    FOR [Month] IN ([January], [February], [March], [April], [May], [June])
) AS PivotTable;

Explanation:
This query pivots monthly sales data into columns, allowing easier visualization and reporting.
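
If the month columns must change dynamically, or you simply prefer plain GROUP BY, the same report can be written with conditional aggregation; a sketch against the same tables (shown for the first three months):

SELECT
    P.ProductName,
    SUM(CASE WHEN MONTH(O.OrderDate) = 1 THEN OD.Quantity * OD.UnitPrice ELSE 0 END) AS [January],
    SUM(CASE WHEN MONTH(O.OrderDate) = 2 THEN OD.Quantity * OD.UnitPrice ELSE 0 END) AS [February],
    SUM(CASE WHEN MONTH(O.OrderDate) = 3 THEN OD.Quantity * OD.UnitPrice ELSE 0 END) AS [March]
FROM OrderDetails OD
JOIN Orders O ON OD.OrderId = O.OrderId
JOIN Products P ON OD.ProductId = P.ProductId
GROUP BY P.ProductName;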

6. Recursive Queries with CTE
Scenario:
Retrieve a hierarchical list of employees under a specific manager.
Query:
WITH EmployeeHierarchy AS (
    SELECT
        EmpId, EmpName, ManagerId
    FROM Employees
    WHERE ManagerId IS NULL  -- Top-level managers

    UNION ALL

    SELECT
        E.EmpId, E.EmpName, E.ManagerId
    FROM Employees E
    INNER JOIN EmployeeHierarchy EH ON E.ManagerId = EH.EmpId
)
SELECT * FROM EmployeeHierarchy;


Explanation:
This recursive CTE walks through employee-manager relationships to display a full organizational hierarchy.
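
A small refinement that is often useful: carry a level counter so the depth of each employee is visible, and cap the recursion as a safety net against accidental cycles in the data.

WITH EmployeeHierarchy AS (
    SELECT EmpId, EmpName, ManagerId, 0 AS HierarchyLevel
    FROM Employees
    WHERE ManagerId IS NULL

    UNION ALL

    SELECT E.EmpId, E.EmpName, E.ManagerId, EH.HierarchyLevel + 1
    FROM Employees E
    INNER JOIN EmployeeHierarchy EH ON E.ManagerId = EH.EmpId
)
SELECT *
FROM EmployeeHierarchy
ORDER BY HierarchyLevel, EmpName
OPTION (MAXRECURSION 100); -- stop with an error if the hierarchy is deeper than expected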

Performance Tip
When dealing with large datasets:

  • Use indexes on join and filter columns.
  • Avoid using SELECT *; only fetch required columns.
  • Check execution plans using SQL Server Management Studio (SSMS).
  • Use temp tables or CTEs for readability and modularization.

Conclusion
Complex SQL queries are powerful tools for solving real-world data problems — from business intelligence reports to analytics dashboards. By mastering subqueries, CTEs, window functions, and pivots, you can transform SQL Server from a data store into an insight engine.

Keep practicing on realistic datasets like AdventureWorks or Northwind — and soon, you’ll write elegant and efficient SQL like a pro!

Next Steps:

  • Try rewriting one of your existing reports using a CTE.
  • Use RANK() and PIVOT for analytics dashboards.
  • Explore query optimization using SET STATISTICS IO, TIME ON (a minimal example follows below).
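
Wrap the statement you want to profile, run it, and read the logical reads and CPU/elapsed times from the Messages tab in SSMS:

SET STATISTICS IO, TIME ON;

SELECT C.CustomerName, COUNT(*) AS TotalOrders
FROM Orders O
JOIN Customers C ON O.CustomerId = C.CustomerId
GROUP BY C.CustomerName;

SET STATISTICS IO, TIME OFF;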


European SQL Server 2022 Hosting :: Advanced Authentication Types in SQL Server and Other Database Systems

clock October 23, 2025 07:52 by author Peter

SQL Authentication
SQL Authentication, also known as SQL Login, stores credentials (username and password) inside the SQL Server instance. It’s a database-managed authentication model, independent of Active Directory or Windows accounts.

Features

  • Uses login name and password stored in SQL Server.
  • Ideal for non-domain clients or cross-platform applications.
  • Supports backward compatibility for legacy systems.

Advantages

  • Works outside of domain networks.
  • Simple to set up for service accounts or third-party tools.
  • Enables easy app-level authentication control.

Disadvantages

  • Passwords must be managed manually.
  • No centralized policy enforcement (e.g., MFA).
  • Slightly higher attack surface if passwords are weak.

Best Practices

  • Use strong password policies and encrypt connections (TLS).
  • Restrict login privileges.
  • Periodically rotate credentials and audit logins.
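
Putting these practices together, a minimal sketch of a locked-down SQL login (the login, database user, and role names are illustrative):

-- Enforce the Windows password policy and expiration on the SQL login
CREATE LOGIN ReportingApp
WITH PASSWORD = 'Use-A-Long_Random#Passphrase!2025',
     CHECK_POLICY = ON,
     CHECK_EXPIRATION = ON;

-- In the target database, grant only the access the application needs
CREATE USER ReportingApp FOR LOGIN ReportingApp;
ALTER ROLE db_datareader ADD MEMBER ReportingApp;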

Windows Authentication
Windows Authentication (also known as Integrated Security or Trusted Connection) uses the Windows or Active Directory identity of the user to authenticate.
It’s the most secure and preferred option for enterprise setups.

Features

  • Uses Kerberos or NTLM protocols for verification.
  • Credentials are not sent directly to SQL Server.
  • Enables Single Sign-On (SSO) via Active Directory.

Advantages

  • Centralized identity control via AD.
  • Strong password and lockout policies automatically applied.
  • Easy role-based access using AD groups.

Disadvantages

  • Requires domain membership or trust relationships.
  • Not ideal for external or Linux-based clients.

Best Practices

  • Prefer Kerberos over NTLM for better security.
  • Use AD groups for permission management.
  • Audit AD memberships regularly.
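
In practice, role-based access usually means creating a login for an Active Directory group rather than for individual users; a sketch (domain and group names are illustrative):

-- Grant access to an AD group instead of individual accounts
CREATE LOGIN [CORP\SalesAnalysts] FROM WINDOWS;

-- In the target database, map the group and give it read-only access
CREATE USER SalesAnalysts FOR LOGIN [CORP\SalesAnalysts];
ALTER ROLE db_datareader ADD MEMBER SalesAnalysts;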

Other Authentication Methods (Short Notes)

Authentication Type | Description | Typical Use
Certificate-Based Authentication | Uses X.509 certificates for passwordless access. | Secure app-to-app or server-to-server communication.
OAuth 2.0 / OpenID Connect | Token-based identity used in web and cloud APIs. | SaaS and microservice applications.
Kerberos Delegation | Extends Windows auth for multi-tier app scenarios. | Linked servers or service accounts.
External Identity Providers (Okta, Ping, etc.) | Federated identity integration across platforms. | Enterprise SSO environments.

Summary Table

Type | Security Level | Identity Source | Ideal For
SQL Authentication | Moderate | SQL Server | Legacy or non-domain systems
Windows Authentication | High | Active Directory | On-prem enterprise setups
Azure AD Authentication | Very High | Entra ID | Cloud or hybrid systems

HostForLIFEASP.NET SQL Server 2022 Hosting

 

 



European SQL Server 2022 Hosting :: How to Back Up Data from Existing Tables (table1 and table2) in SQL Server?

clock October 21, 2025 08:45 by author Peter

To back up data from the existing tables (table1 and table2), you can use two popular SQL techniques: SELECT INTO and INSERT INTO.

Which one to choose depends on whether the backup tables already exist.

1. Quick Backup with Table Creation using SELECT INTO
Using this method, data from the source table is directly copied into a new table.

SELECT * INTO table1_Backup FROM table1;
SELECT * INTO table2_Backup FROM table2;
  • Note: This will fail if table1_Backup or table2_Backup already exist.
  • Ideal for one-time backups or quick cloning.

 2. Using INSERT INTO – Backup into Existing Tables

Use this method when the backup tables are already created (i.e., the schema is predefined).

INSERT INTO table1_Backup SELECT * FROM table1;
INSERT INTO table2_Backup SELECT * FROM table2;
  • No error if backup tables already exist.
  • Useful for regular/daily backups where the structure is fixed.
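
Two small variations on the methods above are handy in practice: create an empty backup table up front so INSERT INTO can be used from day one, and copy only recent rows for incremental backups (the latter assumes the source table has a CreatedDate column, which is not part of the original example):

-- Create an empty copy of the structure only (WHERE 1 = 0 returns no rows)
SELECT * INTO table1_Backup FROM table1 WHERE 1 = 0;

-- Later, append only rows added during the last day (assumes a CreatedDate column)
INSERT INTO table1_Backup
SELECT *
FROM table1
WHERE CreatedDate >= DATEADD(DAY, -1, GETDATE());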

Summary

Method | Creates Backup Table? | Use Case
SELECT INTO | Yes | First-time backup / fast duplication
INSERT INTO | No (table must exist) | Repeated backups / controlled schema

HostForLIFEASP.NET SQL Server 2022 Hosting

 




