European Windows 2019 Hosting BLOG

BLOG about Windows 2019 Hosting and SQL 2019 Hosting - Dedicated to European Windows Hosting Customers

European SQL Server 2022 Hosting :: How to Lower Lock Contention in Financial Ledger Tables in SQL Server?

December 5, 2025 07:34 by author Peter

Financial ledger tables are some of the most heavily accessed structures in enterprise systems. They store critical accounting events such as journal entries, postings, adjustments, settlements, accruals, and reversals. These tables often experience very high write concurrency, especially in:

  • ERP posting engines
  • High-volume billing systems
  • Payment gateways
  • Reconciliation services
  • End-of-day and batch processing jobs

As concurrency increases, SQL Server frequently faces lock contention, leading to:

  • Long-running transactions
  • Deadlocks
  • Slow inserts/updates
  • Users experiencing blocking
  • Increased TempDB usage
  • Timeouts in APIs or stored procedures

This article explains why ledger tables suffer from lock contention, and provides practical, implementable techniques to reduce blocking, improve throughput, and increase system stability.

Why Do Financial Ledger Tables Face High Lock Contention?

1. Constant Writes
Ledger tables often record every financial event.
Even medium systems may generate:

  • 50–200 records per user action
  • Thousands of rows per batch job
  • Millions of rows per day

Continuous writes cause frequent locks.

2. Wide Rows With Many Columns
Ledger rows often contain:

  • CompanyId
  • AccountId
  • Debit/Credit
  • Amount
  • Currency
  • ReferenceId
  • Audit columns

Wide rows increase the cost of locking and reduce throughput.

3. Heavy Use Of Aggregates
Financial reporting queries run concurrently with postings:
SELECT AccountId, SUM(Debit) - SUM(Credit)
FROM Ledger
WHERE AccountingPeriod = '2025-01'
GROUP BY AccountId;


Under the default READ COMMITTED isolation level, such queries take shared locks across large ranges of rows and can trigger lock escalation.

4. Range Locks Due To Non-Optimized Indexing

If AccountId or PostingDate is not properly indexed, SQL Server uses:

  • Table scans
  • Key-range locks
  • Intent locks

These escalate quickly under concurrency.

5. Long Transactions
Posting operations often perform multiple validations and write across several tables.

Long transactions hold locks longer, increasing blocking.

Lock Contention Symptoms In Ledger Tables

Common Issues

  • Blocking chains
  • Deadlocks involving ledger rows
  • LCK_M_S, LCK_M_U, LCK_M_X waits
  • WRITELOG and PAGELATCH waits
  • API timeouts during accounting operations
  • Transaction log growth due to long-running transactions

Typical Indicators
SELECT * FROM sys.dm_tran_locks
WHERE resource_associated_entity_id = OBJECT_ID('Ledger');

SELECT * FROM sys.dm_os_waiting_tasks
WHERE wait_type LIKE 'LCK%';

High frequency of these waits confirms contention.
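To quantify how severe the problem is, the cumulative lock waits can be aggregated from sys.dm_os_wait_stats (a read-only diagnostic; the counters reset when the service restarts):

```sql
-- Top lock-related waits since the last restart, with average wait per occurrence
SELECT wait_type,
       waiting_tasks_count,
       wait_time_ms,
       wait_time_ms / NULLIF(waiting_tasks_count, 0) AS avg_wait_ms
FROM sys.dm_os_wait_stats
WHERE wait_type LIKE 'LCK%'
ORDER BY wait_time_ms DESC;
```

A high and growing avg_wait_ms on LCK_M_U or LCK_M_X waits points at the write-write contention discussed below.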

Technique 1: Use Appropriate Isolation Levels
Default Behavior
Most systems run in READ COMMITTED, causing shared locks during reads.

Recommended
Enable READ_COMMITTED_SNAPSHOT (RCSI):
ALTER DATABASE FinanceDB SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;  -- rolls back open transactions; run in a maintenance window

Benefits:

  • Readers do not block writers
  • Writers do not block readers
  • Ledger reporting queries become non-blocking

In read-heavy workloads, this single change often eliminates the majority of reader/writer blocking.

When To Use Snapshot Isolation?

If accounting operations need a transactionally consistent view across an entire multi-statement transaction:
ALTER DATABASE FinanceDB SET ALLOW_SNAPSHOT_ISOLATION ON;

Then in code:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;

Use it for:

  • Posting engines
  • Settlement jobs
  • Reconciliation processes

Technique 2: Narrow The Update Scope

Problem
Some ledger updates touch many rows at once:
UPDATE Ledger
SET IsReconciled = 1
WHERE AccountId = @Acct AND PostingDate = @Date;

This creates:

  • Key-range locks
  • Long blocking spans
  • Update lock escalations

Solution
Break the update into small batches:
WHILE 1 = 1
BEGIN
    WITH cte AS (
        SELECT TOP (500) IsReconciled
        FROM Ledger
        WHERE AccountId = @Acct
          AND PostingDate = @Date
          AND IsReconciled = 0
    )
    UPDATE cte SET IsReconciled = 1;   -- the updated column must appear in the CTE

    IF @@ROWCOUNT < 500 BREAK;
END


Benefits:

  • Smaller locks
  • Faster release of resources
  • Lower deadlock probability

Technique 3: Use Optimized Indexing To Reduce Range Locks
Bad Indexing Pattern
Ledger tables often have an index like:
CREATE INDEX IX_Ledger_AccountId ON Ledger(AccountId);

Missing PostingDate or JournalEntryId forces range locks.

Recommended Covering Index
CREATE INDEX IX_Ledger_Account_Date
ON Ledger(AccountId, PostingDate)
INCLUDE (Debit, Credit, Amount);


Why this helps:

  • Converts table scans into seeks
  • Reduces lock footprints
  • Prevents range locks from escalating

Partitioning Bonus
If the table is huge (hundreds of millions of rows), partition it by PostingDate. In SQL Server this takes a partition function and scheme (monthly boundaries shown for illustration):

CREATE PARTITION FUNCTION pf_PostingDate (DATE)
AS RANGE RIGHT FOR VALUES ('2025-01-01', '2025-02-01', '2025-03-01');

CREATE PARTITION SCHEME ps_PostingDate
AS PARTITION pf_PostingDate ALL TO ([PRIMARY]);

This physically separates ledger rows, reducing the lock footprint even further.

Technique 4: Use Insert-Only Patterns (Immutable Ledger Design)
The best ledger models treat entries as immutable:

  • No UPDATE
  • No DELETE
  • Only INSERT

Benefits:

  • No update locks
  • No deadlocks on row modifications
  • Append-only architecture scales extremely well

Example: Instead Of Updating Balances

Bad:
UPDATE AccountBalance
SET ClosingBalance = ClosingBalance + @Amt
WHERE AccountId = @Acct;


Good:
INSERT INTO AccountBalanceHistory(AccountId, AmountChange, Timestamp)
VALUES (@Acct, @Amt, GETUTCDATE());


Then compute balances in reporting layer or via materialized views.

In practice, this pattern eliminates most update-related lock contention.
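For the "materialized view" route, SQL Server can maintain the aggregate automatically with an indexed view. A sketch, assuming the AccountBalanceHistory table above with AmountChange declared NOT NULL (COUNT_BIG(*) is mandatory in indexed aggregate views):

```sql
CREATE VIEW dbo.vAccountBalance
WITH SCHEMABINDING
AS
SELECT AccountId,
       SUM(AmountChange) AS Balance,
       COUNT_BIG(*)      AS RowCnt   -- required by indexed views with GROUP BY
FROM dbo.AccountBalanceHistory
GROUP BY AccountId;
GO

CREATE UNIQUE CLUSTERED INDEX IX_vAccountBalance
ON dbo.vAccountBalance (AccountId);
```

Note the trade-off: the engine maintains the view on every insert, which moves some work back onto the aggregate rows, so this fits read-heavy systems better than extreme write-heavy ones.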

Technique 5: Reduce Transaction Length
Problem
Posting operations often run like this:
BEGIN TRAN;

-- validations
-- external API call
-- logging
-- balance adjustment
-- final insert

COMMIT;

Long-running external calls hold locks for seconds.

Solution
Move non-critical logic outside the transaction:
BEGIN TRAN;

-- minimal required operations
INSERT INTO Ledger(...)
INSERT INTO Journal(...)

COMMIT;

-- post-commit logging
-- send notification
-- update cache
-- call external APIs


This shortens lock durations drastically.

Technique 6: Use Row-Versioning To Reduce Write Contention
Add a RowVersion column

ALTER TABLE Ledger
ADD RowVer ROWVERSION;

Use optimistic concurrency:

UPDATE Ledger
SET Amount = @Amt
WHERE LedgerId = @Id AND RowVer = @OldVersion;


If the row changed in the meantime, the UPDATE affects zero rows (@@ROWCOUNT = 0), and the client re-reads the row and retries instead of blocking.
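A minimal sketch of the retry loop in T-SQL (variable names are illustrative; in practice this loop usually lives in application code):

```sql
DECLARE @retries INT = 0, @OldVersion BINARY(8);

WHILE @retries < 3
BEGIN
    SELECT @OldVersion = RowVer FROM Ledger WHERE LedgerId = @Id;

    UPDATE Ledger
    SET Amount = @Amt
    WHERE LedgerId = @Id AND RowVer = @OldVersion;

    IF @@ROWCOUNT = 1 BREAK;   -- success
    SET @retries += 1;         -- row changed underneath us; re-read and retry
END
```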

Technique 7: Move Operational Queries To Replica
If you use Availability Groups:
Primary = Accept writes
Secondary replicas = Handle reads


Reporting queries such as:
SELECT * FROM Ledger WHERE PostingDate BETWEEN ...

should be routed to readable secondaries (for example, via read-only routing with ApplicationIntent=ReadOnly in the connection string), not the primary.

This keeps reporting reads, and their locking overhead, off the primary ledger.

Technique 8: Use Scalable Posting Patterns
Pattern 1: FIFO Posting Queue
Producer pushes journal requests to a queue:

  • Service Bus
  • Kafka
  • SQL Queue table

Consumer processes in controlled batches, reducing contention.
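For the SQL queue table option, a consumer can claim a batch without blocking other consumers by combining UPDLOCK with READPAST, a common dequeue pattern (table and column names here are illustrative):

```sql
-- Claim up to 100 pending requests; rows locked by other consumers are skipped
;WITH batch AS (
    SELECT TOP (100) QueueId, Status, Payload
    FROM dbo.PostingQueue WITH (ROWLOCK, UPDLOCK, READPAST)
    WHERE Status = 'Pending'
    ORDER BY QueueId
)
UPDATE batch
SET Status = 'Processing'
OUTPUT inserted.QueueId, inserted.Payload;   -- the rows this consumer now owns
```

READPAST makes competing consumers skip rows already locked by another worker, so the queue drains in parallel without a blocking chain.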

Pattern 2: Micro-batches
Process ledger writes in slices:
BatchSize = 1000

Best for high-traffic financial systems.

Pattern 3: Sharded Ledger Tables

Split ledger by:

  • Company
  • Account range
  • Region

This allows parallelism and reduces hot rows.

End-To-End Example: Optimizing Journal Posting

Original (High Contention)

BEGIN TRAN

INSERT INTO Ledger(AccountId, Debit, Credit, PostingDate)
VALUES (@Account, @Debit, @Credit, @Date)

UPDATE AccountBalance
SET Balance = Balance + @NetAmount
WHERE AccountId = @Account

COMMIT


Optimized

  • Use RCSI
  • Insert-only pattern
  • Batch account balance updates

BEGIN TRAN

INSERT INTO Ledger(AccountId, Debit, Credit, PostingDate)
VALUES (@Account, @Debit, @Credit, @Date)

INSERT INTO BalanceDelta(AccountId, AmountChange)
VALUES (@Account, @NetAmount)

COMMIT


A background worker periodically aggregates BalanceDelta and applies the net change per account, deleting the applied deltas in the same transaction:
BEGIN TRAN;

UPDATE ab
SET ab.Balance = ab.Balance + d.NetChange
FROM AccountBalance AS ab
JOIN (SELECT AccountId, SUM(AmountChange) AS NetChange
      FROM BalanceDelta
      GROUP BY AccountId) AS d
  ON d.AccountId = ab.AccountId;

DELETE FROM BalanceDelta;

COMMIT;


This eliminates hot rows and reduces contention dramatically.
Diagram: Ledger Lock Contention vs Optimized Architecture

Before Optimization

Client Requests → Ledger Table (Hotspot)
                  ↑       ↓
                Updates  Aggregates
                  ↑       ↓
               Blocking / Deadlocks


After Optimization
Client → Ledger (Insert-Only) → Processed Quickly
Client → BalanceDelta → Background Worker → AccountBalance (Low Contention)
Reports → Secondary Replica (No Locks On Primary)


This architecture is used by modern ERP and billing systems.

Conclusion

Financial ledger tables face extreme lock contention because of:

  • High write concurrency
  • Frequent aggregates
  • Wide rows
  • Bad indexing
  • Long transactions

By applying the techniques in this guide (RCSI, batching, optimized indexing, insert-only design, optimistic concurrency, posting queues, and read replicas), you can dramatically reduce contention, enabling smooth and scalable financial operations.

HostForLIFEASP.NET SQL Server 2022 Hosting



European SQL Server 2022 Hosting - HostForLIFE :: Building a Table for the SQL Rule Engine

November 24, 2025 07:37 by author Peter

Instead of hard-coding validation logic, you can model it as data using a SQL-based rule engine. This is especially helpful in ERP, CRM, WMS, finance, banking, healthcare, and dynamic form engines, where business rules change constantly but deployments are costly or slow. This article describes how to create a dynamic rule engine stored entirely in SQL, using:

  • Rule definition tables
  • Condition groups
  • Dynamic SQL evaluation
  • Parameter injection
  • Runtime orchestration
  • Exception logging
  • Flowcharts + architecture diagrams

Core Problem
Business logic changes often:

  • “If Quantity > Stock → throw error”
  • “If Customer.CreditLimit < Order.Total → fail”
  • “If Item.Category = ‘Hazardous’ → require special approval”
  • “If Vendor is blocked → stop PO creation”

Traditionally:
Developers change code → deploy → test → release.

Better approach:
Store rules in tables → interpret them dynamically → no deployments needed.

Rule Engine Table Design
Below is a battle-tested schema used in real enterprise systems.

Rule Table (Rule Master)
Stores the rule metadata.

CREATE TABLE RuleMaster (
    RuleId INT IDENTITY PRIMARY KEY,
    RuleName VARCHAR(200),
    EntityName VARCHAR(200),         -- “SalesOrder”, “Stockline”, etc.
    IsActive BIT DEFAULT 1,
    Severity VARCHAR(30),            -- "Error", "Warning", "Info"
    ErrorMessage VARCHAR(500),
    CreatedDate DATETIME2 DEFAULT SYSDATETIME()
);

Examples:
“SO_Total_Validation”
“Stock_Availability_Check”
“PO_Block_Vendor_Validation”


Rule Condition Groups
Groups allow AND/OR nesting.

CREATE TABLE RuleConditionGroup (
    GroupId INT IDENTITY PRIMARY KEY,
    RuleId INT FOREIGN KEY REFERENCES RuleMaster(RuleId),
    GroupOperator VARCHAR(5) CHECK (GroupOperator IN ('AND','OR'))
);

Example:
Group 1: Quantity > 0 AND Price > 0
Group 2: Customer.CreditLimit < Order.Total

Rule Conditions
Each group contains conditions stored as rows, not code.

CREATE TABLE RuleCondition (
    ConditionId INT IDENTITY PRIMARY KEY,
    GroupId INT FOREIGN KEY REFERENCES RuleConditionGroup(GroupId),
    LeftOperand VARCHAR(200),        -- Field name, JSON path, SQL expression
    Operator VARCHAR(10),            -- =, <>, >, <, >=, LIKE, IN, BETWEEN, etc.
    RightOperand VARCHAR(200),       -- Literal value or field-to-field
    DataType VARCHAR(20)             -- Int, Decimal, Date, NVarchar
);

Example rows:

LeftOperand | Operator | RightOperand | DataType
OrderTotal  | >        | CreditLimit  | Decimal
Status      | <>       | 'Closed'     | NVARCHAR
Quantity    | <=       | StockQty     | Int

Rule Orchestration Table (Optional but Recommended)

Allows mapping rules to events:
CREATE TABLE RuleEventMapping (
    Id INT IDENTITY PRIMARY KEY,
    RuleId INT FOREIGN KEY REFERENCES RuleMaster(RuleId),
    EventName VARCHAR(200)           -- "OnCreate", "OnUpdate", "OnSubmit"
);
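Putting the four tables together, seeding one rule end-to-end might look like this (illustrative values; the '@' prefix on RightOperand marks a field-to-field comparison, matching the evaluator's convention):

```sql
INSERT INTO RuleMaster (RuleName, EntityName, Severity, ErrorMessage)
VALUES ('Stock_Availability_Check', 'SalesOrder', 'Error', 'Quantity exceeds available stock.');

DECLARE @RuleId INT = SCOPE_IDENTITY();

INSERT INTO RuleConditionGroup (RuleId, GroupOperator) VALUES (@RuleId, 'AND');
DECLARE @GroupId INT = SCOPE_IDENTITY();

-- passes when Quantity <= StockQty (both read from the JSON payload)
INSERT INTO RuleCondition (GroupId, LeftOperand, Operator, RightOperand, DataType)
VALUES (@GroupId, 'Quantity', '<=', '@StockQty', 'Int');

INSERT INTO RuleEventMapping (RuleId, EventName) VALUES (@RuleId, 'OnCreate');
```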

Dynamic SQL Generator (Core Engine)
Below is a simplified stored procedure that:

  • Loads the active rules for the entity/event
  • Loops through condition groups
  • Builds dynamic SQL
  • Evaluates TRUE/FALSE
  • Returns failure messages

Stored Procedure Skeleton
CREATE PROCEDURE ExecuteRuleEngine
    @EntityName VARCHAR(200),
    @EventName VARCHAR(100),
    @JsonInput NVARCHAR(MAX)        -- entity payload from API
AS
BEGIN
SET NOCOUNT ON;

DECLARE @Errors TABLE (RuleId INT, Message VARCHAR(500));

SELECT R.RuleId, R.RuleName, R.Severity, R.ErrorMessage
INTO #Rules
FROM RuleMaster R
JOIN RuleEventMapping M ON R.RuleId = M.RuleId
WHERE R.EntityName = @EntityName
  AND M.EventName = @EventName
  AND R.IsActive = 1;

DECLARE @RuleId INT;

DECLARE rule_cursor CURSOR FOR
    SELECT RuleId FROM #Rules;

OPEN rule_cursor;
FETCH NEXT FROM rule_cursor INTO @RuleId;

WHILE @@FETCH_STATUS = 0
BEGIN
    IF dbo.EvaluateRule(@RuleId, @JsonInput) = 0
    BEGIN
        INSERT INTO @Errors
        SELECT RuleId, ErrorMessage FROM #Rules WHERE RuleId = @RuleId;
    END

    FETCH NEXT FROM rule_cursor INTO @RuleId;
END

CLOSE rule_cursor;
DEALLOCATE rule_cursor;

SELECT * FROM @Errors;
END


Core Function: EvaluateRule()
This dynamically evaluates a rule using all its conditions.
CREATE FUNCTION EvaluateRule
(
    @RuleId INT,
    @JsonInput NVARCHAR(MAX)
)
RETURNS BIT
AS
BEGIN
-- NOTE: shown as a function for readability. SQL Server does not allow
-- sp_executesql inside a user-defined function, so in production implement
-- this body as a stored procedure with a BIT OUTPUT parameter. This
-- simplified version also joins all conditions with AND; per-group OR
-- handling via GroupOperator is left as an extension.
DECLARE @SQL NVARCHAR(MAX) = '';
DECLARE @Result BIT = 1;

SELECT @SQL = STRING_AGG(
    '(' +
    CASE DataType
        WHEN 'Int' THEN 'CAST(JSON_VALUE(@JsonInput, ''$.' + LeftOperand + ''') AS INT)'
        WHEN 'Decimal' THEN 'CAST(JSON_VALUE(@JsonInput, ''$.' + LeftOperand + ''') AS DECIMAL(18,2))'
        WHEN 'Date' THEN 'CAST(JSON_VALUE(@JsonInput, ''$.' + LeftOperand + ''') AS DATETIME2)'
        ELSE 'JSON_VALUE(@JsonInput, ''$.' + LeftOperand + ''')'
    END
    + ' ' + Operator + ' ' +
    CASE
        WHEN LEFT(RightOperand, 1) = '@' THEN 'JSON_VALUE(@JsonInput, ''$.' + SUBSTRING(RightOperand, 2, 200) + ''')'
        ELSE QUOTENAME(RightOperand, '''')
    END + ')',
    ' AND '
)
FROM RuleCondition RC
JOIN RuleConditionGroup G ON RC.GroupId = G.GroupId
WHERE G.RuleId = @RuleId;

DECLARE @FinalSQL NVARCHAR(MAX) =
    'SELECT @Result = CASE WHEN ' + @SQL + ' THEN 1 ELSE 0 END';

EXEC sp_executesql @FinalSQL,
     N'@JsonInput NVARCHAR(MAX), @Result BIT OUTPUT',
     @JsonInput = @JsonInput, @Result = @Result OUTPUT;

RETURN @Result;
END


This function:

  • Converts JSON data into SQL values
  • Generates expressions like:

(CAST(JSON_VALUE(@json,'$.Quantity') AS INT) > CAST(JSON_VALUE(@json,'$.StockQty') AS INT))
AND
(JSON_VALUE(@json,'$.Status') <> 'Closed')

  • Evaluates the expression via sp_executesql
  • Returns 1 (pass) or 0 (fail)

Example: Real Rule Engine in Action
Rule

If OrderTotal > CreditLimit OR Customer is blocked → fail.

Conditions stored in the database (the OR sits between the two groups):

Group | Left       | Operator | Right       | DataType
1     | OrderTotal | >        | CreditLimit | Decimal
2     | IsBlocked  | =        | 1           | Int

Generated SQL
SELECT CASE WHEN
(
  CAST(JSON_VALUE(@json,'$.OrderTotal') AS DECIMAL(18,2)) > CAST(JSON_VALUE(@json,'$.CreditLimit') AS DECIMAL(18,2))
)
OR
(
  CAST(JSON_VALUE(@json,'$.IsBlocked') AS INT) = 1
)
THEN 0 ELSE 1 END


Logging & Monitoring
Use:
CREATE TABLE RuleEngineLog (
    RunId BIGINT IDENTITY PRIMARY KEY,
    RuleId INT,
    EntityName VARCHAR(200),
    InputPayload NVARCHAR(MAX),
    Result BIT,
    Message VARCHAR(500),
    CreatedDate DATETIME2 DEFAULT SYSDATETIME()
);


Best Practices
Do

  • Pre-validate JSON schema using ISJSON
  • Cache rule metadata
  • Avoid string concatenation inside cursor
  • Create reusable SQL UDFs (e.g., CompareValues)
  • Add unit tests for each rule
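For example, the engine's entry point can reject malformed payloads before any rule executes (a sketch to place at the top of ExecuteRuleEngine):

```sql
IF ISJSON(@JsonInput) = 0
BEGIN
    RAISERROR('Invalid JSON payload.', 16, 1);
    RETURN;
END
```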

Avoid

  • Very large nested OR conditions
  • Using dynamic SQL inside loops without batching
  • Storing complex formulas directly in text fields

Final Notes
This SQL Rule Engine design is:

  • Flexible
  • Enterprise-grade
  • Deployment-free
  • Extensible
  • Highly maintainable

Perfect for ERP, WMS, finance, insurance, and enterprise SaaS systems.




European SQL Server 2022 Hosting :: Creating a Data Versioning System that stores each record's history (SQL Server +.NET)

November 19, 2025 07:18 by author Peter

A data versioning system records every modification made to your data: who changed what, when it changed, and how to audit or restore previous versions. Auditing, debugging, GDPR and financial compliance, and capabilities like "time travel", record revert, and change comparison all depend on it.

This article provides a workable, production-ready design, with several implementation alternatives (built-in temporal tables, manual history tables with triggers, and application-level capture), ER diagrams, flowcharts, SQL scripts, ASP.NET Core patterns, and best practices.

Two main approaches (summary)

  • System-versioned temporal tables (SQL Server feature, simple, performant, automatic).
    • Pros: automatic row-versioning, built-in time travel query, efficient.
    • Cons: less flexible metadata (who/why), harder to store JSON diffs, needs SQL Server 2016+.
  • Manual history tables + triggers / stored procedures.
    • Pros: full control (store user, reason, JSON diffs, tx ids), easier to extend.
    • Cons: more code, need to manage triggers and retention.

Additionally, application-level capture (EF Core interceptors, change tracker) can supplement to include user/context info and business logic.

Option A: System-versioned temporal table (recommended when available)
1. Create temporal table

CREATE TABLE dbo.Customer
(
    CustomerId   UNIQUEIDENTIFIER NOT NULL PRIMARY KEY DEFAULT NEWID(),
    Name         NVARCHAR(200) NOT NULL,
    Email        NVARCHAR(200),
    Balance      DECIMAL(18,2) DEFAULT 0,
    SysStartTime DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
    SysEndTime   DATETIME2 GENERATED ALWAYS AS ROW END NOT NULL,
    PERIOD FOR SYSTEM_TIME (SysStartTime, SysEndTime)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.CustomerHistory));

SQL Server creates dbo.CustomerHistory automatically with the row versions. But CustomerHistory does not include ChangedBy or ChangeReason.

2. Add audit metadata (who/why)
You can store user info in a separate audit table, extend the approach with triggers that insert augmented history rows, or write to an audit table from the application layer.
Example: keep a CustomerHistoryMeta table storing (HistoryRowPK, ChangedBy, ChangeReason, TxId), linked to history rows using SysStartTime and CustomerId as keys.

3. Query history (time travel)
-- Get record as of a point in time
SELECT *
FROM dbo.Customer
FOR SYSTEM_TIME AS OF '2025-11-01 10:00:00'
WHERE CustomerId = '...';

-- Get all versions
SELECT * FROM dbo.Customer
FOR SYSTEM_TIME ALL
WHERE CustomerId = '...'
ORDER BY SysStartTime;


4. Revert to historical version (pattern)
To revert, read the historical row and insert a new current row (or update current row) — do not “restore” history row directly; write a new change so revert is also tracked.
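A minimal sketch of that revert pattern, assuming the target @CustomerId and the @VersionStartTime of the version to restore are known:

```sql
-- Copy the historical values into the current row; system versioning
-- records this UPDATE, so the revert itself becomes part of the history
UPDATE c
SET c.Name = h.Name, c.Email = h.Email, c.Balance = h.Balance
FROM dbo.Customer AS c
JOIN dbo.CustomerHistory AS h
  ON h.CustomerId = c.CustomerId
WHERE c.CustomerId = @CustomerId
  AND h.SysStartTime = @VersionStartTime;
```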

Option B: Manual history table + trigger (flexible, full metadata)
1. Schema (example for table Customer)
CREATE TABLE dbo.Customer
(
    CustomerId UNIQUEIDENTIFIER PRIMARY KEY DEFAULT NEWID(),
    Name NVARCHAR(200),
    Email NVARCHAR(200),
    Balance DECIMAL(18,2)
);

CREATE TABLE dbo.CustomerHistory
(
    HistoryId BIGINT IDENTITY PRIMARY KEY,
    CustomerId UNIQUEIDENTIFIER NOT NULL,
    ChangeType CHAR(1) NOT NULL, -- I/U/D
    ChangedBy NVARCHAR(200) NULL,
    ChangeReason NVARCHAR(500) NULL,
    ChangedAt DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME(),
    PayloadJson NVARCHAR(MAX) NOT NULL, -- full row snapshot as JSON
    TxId UNIQUEIDENTIFIER NOT NULL DEFAULT NEWID()
);

CREATE INDEX IX_CustomerHistory_CustomerId ON dbo.CustomerHistory(CustomerId);
CREATE INDEX IX_CustomerHistory_TxId ON dbo.CustomerHistory(TxId);


2. Trigger to capture changes
CREATE TRIGGER trg_Customer_Audit
ON dbo.Customer
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @txid UNIQUEIDENTIFIER = NEWID();
    DECLARE @changedBy NVARCHAR(200) = SUSER_SNAME(); -- replace via CONTEXT_INFO if app sets

    -- Pure INSERTs -> 'I' (exclude rows that also appear in deleted, i.e. updates)
    INSERT INTO dbo.CustomerHistory (CustomerId, ChangeType, ChangedBy, ChangeReason, PayloadJson, TxId)
    SELECT i.CustomerId, 'I', @changedBy, NULL,
           (SELECT i.CustomerId, i.Name, i.Email, i.Balance FOR JSON PATH, WITHOUT_ARRAY_WRAPPER), @txid
    FROM inserted i
    WHERE NOT EXISTS (SELECT 1 FROM deleted d WHERE d.CustomerId = i.CustomerId);

    -- UPDATEs -> 'U' (capture the new snapshot; also capture deleted if you need old & new)
    INSERT INTO dbo.CustomerHistory (CustomerId, ChangeType, ChangedBy, ChangeReason, PayloadJson, TxId)
    SELECT u.CustomerId, 'U', @changedBy, NULL,
           (SELECT u.CustomerId, u.Name, u.Email, u.Balance FOR JSON PATH, WITHOUT_ARRAY_WRAPPER), @txid
    FROM inserted u
    WHERE EXISTS (SELECT 1 FROM deleted d WHERE d.CustomerId = u.CustomerId);

    -- Pure DELETEs -> 'D' (exclude rows that also appear in inserted, i.e. updates)
    INSERT INTO dbo.CustomerHistory (CustomerId, ChangeType, ChangedBy, ChangeReason, PayloadJson, TxId)
    SELECT d.CustomerId, 'D', @changedBy, NULL,
           (SELECT d.CustomerId, d.Name, d.Email, d.Balance FOR JSON PATH, WITHOUT_ARRAY_WRAPPER), @txid
    FROM deleted d
    WHERE NOT EXISTS (SELECT 1 FROM inserted i WHERE i.CustomerId = d.CustomerId);
END
Notes


SUSER_SNAME() gives SQL login; for application user prefer setting SESSION_CONTEXT or CONTEXT_INFO before DML and reading it in trigger (see below).

PayloadJson stores full state snapshot; you may instead store only changed columns (diff) if storage is a concern.

3. Passing application user into trigger
Before executing DML, set session context from application:
EXEC sp_set_session_context 'AppUser', 'app.user@example.com';
Then inside trigger:

DECLARE @changedBy NVARCHAR(200) = CONVERT(NVARCHAR(200), SESSION_CONTEXT(N'AppUser'));

This makes ChangedBy accurate.

4. Stored procedures to query history
CREATE PROCEDURE usp_GetCustomerHistory
    @CustomerId UNIQUEIDENTIFIER
AS
BEGIN
    SELECT HistoryId, ChangeType, ChangedBy, ChangeReason, ChangedAt, PayloadJson
    FROM dbo.CustomerHistory
    WHERE CustomerId = @CustomerId
    ORDER BY ChangedAt DESC;
END


5. Revert procedure (create new record state from history)
CREATE PROCEDURE usp_RevertCustomerToHistory
    @HistoryId BIGINT,
    @RevertedBy NVARCHAR(200)
AS
BEGIN
    DECLARE @payload NVARCHAR(MAX);
    SELECT @payload = PayloadJson FROM dbo.CustomerHistory WHERE HistoryId = @HistoryId;

    -- parse JSON into columns and update current table
    UPDATE dbo.Customer
    SET Name = JSON_VALUE(@payload, '$.Name'),
        Email = JSON_VALUE(@payload, '$.Email'),
        Balance = TRY_CAST(JSON_VALUE(@payload, '$.Balance') AS DECIMAL(18,2))
    WHERE CustomerId = JSON_VALUE(@payload, '$.CustomerId');

    -- insert a history record marking revert
    INSERT INTO dbo.CustomerHistory (CustomerId, ChangeType, ChangedBy, ChangeReason, PayloadJson, TxId)
    VALUES (JSON_VALUE(@payload, '$.CustomerId'), 'U', @RevertedBy, 'Revert to HistoryId ' + CAST(@HistoryId AS NVARCHAR(20)), @payload, NEWID());
END


Application-level capture (EF Core interceptor) - add user / reason
If you use EF Core, intercept SaveChanges to write audit to history table so you have full contextual data (user id, IP, reason).

Example (simplified)
public class AuditSaveChangesInterceptor : SaveChangesInterceptor
{
    private readonly IHttpContextAccessor _http;
    public AuditSaveChangesInterceptor(IHttpContextAccessor http) => _http = http;

    public override async ValueTask<InterceptionResult<int>> SavingChangesAsync(DbContextEventData eventData,
        InterceptionResult<int> result, CancellationToken cancellationToken = default)
    {
        var ctx = eventData.Context;
        var user = _http.HttpContext?.User?.Identity?.Name ?? "system";
        var entries = ctx.ChangeTracker.Entries()
            .Where(e => e.State == EntityState.Modified
                     || e.State == EntityState.Added
                     || e.State == EntityState.Deleted)
            .ToList(); // materialize first: we add CustomerHistory entries while iterating

        foreach (var entry in entries)
        {
            // deleted entities no longer have current values; snapshot the originals
            var values = entry.State == EntityState.Deleted ? entry.OriginalValues : entry.CurrentValues;
            var payload = JsonSerializer.Serialize(values.ToObject());
            var history = new CustomerHistory
            {
                CustomerId = (Guid)values["CustomerId"],
                ChangeType = entry.State == EntityState.Added ? "I" : entry.State == EntityState.Deleted ? "D" : "U",
                ChangedBy = user,
                PayloadJson = payload,
                ChangedAt = DateTime.UtcNow,
                TxId = Guid.NewGuid()
            };
            ctx.Set<CustomerHistory>().Add(history);
        }

        return await base.SavingChangesAsync(eventData, result, cancellationToken);
    }
}

Register interceptor in Program.cs for EF Core.

Benefits: you have direct access to user principal and request info.

Query patterns & useful API endpoints

  • GET /api/entity/{id}/history — list versions.
  • GET /api/entity/{id}/history/{historyId} — fetch specific version.
  • GET /api/entity/{id}/diff?from=histA&to=histB — return field-level diff.
  • POST /api/entity/{id}/revert — revert to historyId (authz required).
  • GET /api/entity/{id}/asOf?timestamp=... — time-travel view (temporal tables easy).

Example C# controller methods using Dapper/EF Core — omitted for brevity (pattern same as stored procs).

Field-level diff (practical approach)
Store PayloadJson for each version (full row).

To compute diff, fetch two JSON objects and compare keys; report changed fields, old and new values. Use server code (C#) to parse JSON into Dictionary<string, object> and compare.

Example diff function (C#)
var left = JsonSerializer.Deserialize<Dictionary<string, object>>(leftJson);
var right = JsonSerializer.Deserialize<Dictionary<string, object>>(rightJson);
var diffs = new List<Diff>();
foreach (var key in left.Keys.Union(right.Keys))
{
    left.TryGetValue(key, out var lv);
    right.TryGetValue(key, out var rv);
    // deserialized values are JsonElement instances, so compare their text form
    if (!string.Equals(lv?.ToString(), rv?.ToString()))
        diffs.Add(new Diff { Field = key, Old = lv?.ToString(), New = rv?.ToString() });
}

Return diffs to UI.

Retention, compression and archiving

  • Keep history for required retention window (e.g., 2–7 years) per compliance.
  • Archive older history to cheaper storage (Parquet/JSON files in Blob/S3). Provide a process to purge archived rows.
  • Consider compressing PayloadJson (e.g., gzip) if history large. Store as VARBINARY or use table compression features.
  • Avoid storing huge BLOBs repeatedly; store references instead.
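A sketch using SQL Server's built-in GZip functions, assuming a hypothetical VARBINARY(MAX) column PayloadGz alongside (or instead of) PayloadJson:

```sql
-- Compress existing snapshots (PayloadGz is a hypothetical VARBINARY(MAX) column)
UPDATE dbo.CustomerHistory
SET PayloadGz = COMPRESS(PayloadJson);

-- Read one snapshot back
SELECT CAST(DECOMPRESS(PayloadGz) AS NVARCHAR(MAX)) AS PayloadJson
FROM dbo.CustomerHistory
WHERE HistoryId = @HistoryId;
```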

Concurrency, transaction and tx-id handling
Use a TxId and TransactionCommittedAt in history to group related row changes into a single logical transaction. For trigger approach, generate TxId from CONTEXT_INFO or SESSION_CONTEXT set by application before DML. In triggers use that TxId instead of NEWID() if you want grouping.

Example in application
EXEC sp_set_session_context @key = 'TxId', @value = '...';

Trigger reads

DECLARE @txid UNIQUEIDENTIFIER = CONVERT(uniqueidentifier, SESSION_CONTEXT(N'TxId'));

This allows you to revert a complete transaction by selecting all history rows with same TxId.
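For example, to inspect (or replay in reverse) everything a logical transaction touched:

```sql
-- All history rows written by one logical transaction, oldest first
SELECT HistoryId, CustomerId, ChangeType, PayloadJson, ChangedAt
FROM dbo.CustomerHistory
WHERE TxId = @TxId
ORDER BY ChangedAt;
```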

Security & GDPR considerations
Protect personal data: encryption at rest and in transit.

Provide delete/forget workflows: redact PII in history if user requests (but preserve audit trail per law). Consider pseudonymisation.

Audit access to history itself (who viewed history). Log access events separately.

Restrict revert endpoints to authorized roles.

Performance considerations

  • Index history tables on (EntityId, ChangedAt).
  • Partition large history tables by date (Monthly/Yearly).
  • Use compression (ROW/PAGE) on history partitions.
  • For high write volumes prefer temporal tables which are highly optimised, or use append-only history tables with minimal indexes to speed inserts; create read-optimized projections for reporting.

Testing checklist

  • Verify INSERT / UPDATE / DELETE produce history rows with correct payload.
  • Test user propagation via SESSION_CONTEXT.
  • Test revert and confirm a new history row created.
  • Test time-travel queries for temporal tables.
  • Test diff outputs for correctness.
  • Test retention/archiving and reimport from archive.
  • Load-test writes and history insert rate.

Example migration & scripts (summary)

  • Add history table schema for each entity.
  • Add triggers or enable temporal versioning.
  • Add SyncWatermark table if you need cross-system sync.
  • Add stored procedures to fetch history, revert, cleanup/archive.
  • Add application interceptor to set SESSION_CONTEXT('AppUser') and optional TxId.
  • Build APIs for UI and admin tasks.

Final recommendations

  • If you are using SQL Server 2016+ and simply need time-travel and row-version history queries: use system-versioned temporal tables (easier, performant).
  • If you need richer metadata (who/why), JSON diffs, or transaction-level revert and grouping: use manual history tables and triggers combined with application-level session context.
  • Combine the two methods: use temporal tables for automatic versioning, and keep a separate AuditMeta table with user/tx metadata (written by the application or a trigger) to link history rows to their context.
  • Automate monitoring, archiving, and retention.




European SQL Server 2022 Hosting :: Identifying SQL Server "Cannot Initialize Data Source Object" Errors

November 11, 2025 08:04 by author Peter

While working with Linked Servers, OPENQUERY, or OPENROWSET in SQL Server, you may encounter one of the most common and frustrating errors:
OLE DB provider "Microsoft.ACE.OLEDB.12.0" for linked server "(null)" returned message "Cannot initialize data source object of OLE DB provider".

This error usually occurs when SQL Server is unable to access or initialize the external data source (like Excel, Access, or another SQL Server).

In this guide, we’ll break down:

  • The main causes of this error
  • Step-by-step troubleshooting
  • Common scenarios (Excel, Access, Linked Servers)
  • Configuration & security fixes

Common Scenarios Where the Error Appears

  • Querying Excel via OPENROWSET:
    SELECT * FROM OPENROWSET('Microsoft.ACE.OLEDB.12.0', 'Excel 12.0;Database=C:\Data\Sales.xlsx;HDR=YES', 'SELECT * FROM [Sheet1$]');

  • Accessing an Access database:
    SELECT * FROM OPENROWSET('Microsoft.ACE.OLEDB.12.0', 'C:\Data\Customer.accdb';'admin';'', 'SELECT * FROM Customers');

  • Using a Linked Server:
    SELECT * FROM OPENQUERY(ExcelLink, 'SELECT * FROM [Sheet1$]');

If any of these fail, you’ll often see the “Cannot initialize data source object” error.

Root Causes of the Error

Here are the most common reasons this error occurs (each is addressed in the steps below):

  • The SQL Server service account lacks permission on the source file or folder
  • 'Ad Hoc Distributed Queries' is disabled
  • The required OLE DB provider is not installed
  • The provider's bitness (32-bit vs 64-bit) does not match the SQL Server engine
  • The file is open, locked, or on an inaccessible network path

Step-by-Step Troubleshooting Guide
Step 1: Check File Permissions

  • Locate the file (e.g., C:\Data\Sales.xlsx).
  • Right-click → Properties → Security tab.
  • Ensure the SQL Server service account (like NT SERVICE\MSSQLSERVER or Network Service) has read/write permissions.

If not sure which account SQL uses, run:
SELECT servicename, service_account
FROM sys.dm_server_services;


Step 2: Enable Ad Hoc Distributed Queries
Run the following in SSMS:
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'Ad Hoc Distributed Queries', 1;
RECONFIGURE;


Then re-run your OPENROWSET or OPENDATASOURCE query.

Step 3: Verify OLE DB Provider Installation
Check if the required OLE DB provider is installed:

  • For Excel/Access → Microsoft.ACE.OLEDB.12.0
  • For SQL Server-to-SQL Server → SQLNCLI or MSOLEDBSQL (prefer MSOLEDBSQL; SQLNCLI is deprecated)

You can verify it using:
EXEC master.dbo.sp_enum_oledb_providers;

Step 4: Check 32-bit vs 64-bit Compatibility
The OLE DB provider runs inside the SQL Server process, so its bitness must match the engine: a 64-bit SQL Server instance requires the 64-bit provider, regardless of whether SSMS or the client is 32-bit. If other applications on the machine need the 32-bit provider, you may have to install both versions.

Step 5: Ensure File Is Closed and Accessible
If the Excel file is open by another user or locked for editing, SQL can’t read it.
Close the file and retry.

If it’s on a network path, ensure:
\\ServerName\SharedFolder\File.xlsx

is accessible from the SQL Server machine using the same service account credentials.

Step 6: Test Connection String
Try running this minimal query:
SELECT *
FROM OPENROWSET(
    'Microsoft.ACE.OLEDB.12.0',
    'Excel 12.0;Database=C:\Data\Test.xlsx;HDR=YES',
    'SELECT TOP 5 * FROM [Sheet1$]'
);

If it works with a simple file, the issue is likely your original path or sheet name.

Step 7: Configure Linked Server Options
If using Linked Server for Excel or Access:
EXEC sp_addlinkedserver
    @server='ExcelLink',
    @srvproduct='Excel',
    @provider='Microsoft.ACE.OLEDB.12.0',
    @datasrc='C:\Data\Sales.xlsx',
    @provstr='Excel 12.0;HDR=YES';

EXEC sp_serveroption 'ExcelLink', 'Data Access', TRUE;

Then test:
SELECT * FROM OPENQUERY(ExcelLink, 'SELECT * FROM [Sheet1$]');


Advanced Troubleshooting Tips

  • Reboot the SQL Service after installing new OLE DB drivers.
  • If you’re running on SQL Server Express, ensure it supports Distributed Queries.
  • Avoid UNC paths (\\Server\Folder\File.xlsx) unless the SQL service has domain permissions.

Check Event Viewer logs under Application → MSSQLSERVER for detailed provider errors.

Alternative Approaches

If the problem persists, consider alternatives:

  • Use Import/Export Wizard (in SSMS) instead of OPENROWSET.
  • Use BULK INSERT for CSV data.
  • For Access, use ODBC Linked Tables or .NET Integration in your application layer.
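As a sketch of the BULK INSERT alternative for CSV data (dbo.SalesStaging and the file path are placeholders for your own table and file; FORMAT = 'CSV' requires SQL Server 2017 or later):

```sql
-- Assumes dbo.SalesStaging already exists and matches the CSV column layout.
BULK INSERT dbo.SalesStaging
FROM 'C:\Data\Sales.csv'
WITH (FORMAT = 'CSV', FIRSTROW = 2);  -- FIRSTROW = 2 skips the header row
```

Unlike OPENROWSET, this needs no OLE DB provider at all, only read access to the file.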

HostForLIFEASP.NET SQL Server 2022 Hosting



European SQL Server 2022 Hosting :: Developers' Best Practices for SQL Server Security

clock November 3, 2025 06:26 by author Peter

Security is one of the most crucial components of any database system. As developers, we often favor performance and overlook the fact that a single unsafe query or overly broad permission can expose private company data. In this post, let's examine SQL Server security best practices that every developer should follow, with examples, explanations, and a straightforward flowchart.

The Significance of Security
Modern applications store critical data such as user passwords, financial transactions, and personal information.
If your SQL Server isn't set up securely, attackers can exploit weaknesses like:

  • SQL Injection
  • Privilege Escalation
  • Information Leakage
  • Unauthorized Access
That’s why database-level security must be a shared responsibility between developers and DBAs.


Security Layers in SQL Server
Before jumping into best practices, understand that SQL Server security has multiple layers:

  • Authentication: Who can access the server
  • Authorization: What actions they can perform
  • Encryption: How data is protected in transit and at rest
  • Auditing: Tracking who did what and when

Best Practices for Developers
Let’s break down the most essential security best practices step by step.

1. Use Parameterized Queries (Prevent SQL Injection)

Never concatenate user input directly in your SQL statements.

Vulnerable Example

string query = "SELECT * FROM Users WHERE Username = '" + userInput + "'";

Safe Example
SqlCommand cmd = new SqlCommand("SELECT * FROM Users WHERE Username = @Username", conn);
cmd.Parameters.AddWithValue("@Username", userInput);

Why: Parameterized queries ensure that input is treated as data, not executable SQL, preventing SQL injection attacks.

2. Follow the Principle of Least Privilege
Grant only the permissions required — nothing more.

Don’t

  • Use sa or system admin accounts for applications.
  • Give db_owner role to every user. 

Do

  • Create application-specific users with limited access.
  • Assign roles like db_datareader or db_datawriter as needed.
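A minimal sketch of an application-specific user with only the data roles it needs (AppUser and the password are placeholders):

```sql
-- Create a dedicated login and database user for the application.
CREATE LOGIN AppUser WITH PASSWORD = 'Str0ng!Passw0rd';
CREATE USER AppUser FOR LOGIN AppUser;

-- Grant only read/write access, nothing more.
ALTER ROLE db_datareader ADD MEMBER AppUser;
ALTER ROLE db_datawriter ADD MEMBER AppUser;
```

The application connects as AppUser, so even a compromised connection string cannot alter schema or grant permissions.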

3. Secure Connection Strings
Never store connection strings in plain text inside your source code.
Use Configuration Files or Secrets Manager:

  • .NET Core: Store in appsettings.json and protect with User Secrets or Azure Key Vault.
  • Windows: Use DPAPI or Encrypted Configuration Sections.

Example
"ConnectionStrings": {
  "DefaultConnection": "Server=myServer;Database=myDB;User Id=appUser;Password=***;"
}


4. Encrypt Sensitive Data
Use SQL Server encryption features to protect confidential data.

Transparent Data Encryption (TDE)

Encrypts the database files (.mdf, .ldf) at rest.
CREATE DATABASE ENCRYPTION KEY
WITH ALGORITHM = AES_256
ENCRYPTION BY SERVER CERTIFICATE MyServerCert;
ALTER DATABASE MyDB SET ENCRYPTION ON;
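The TDE example above assumes the server certificate already exists. A minimal sketch of the prerequisites, run in master (all names and passwords are placeholders):

```sql
USE master;
-- The database master key protects the certificate.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'Str0ng!MasterKeyPwd';
CREATE CERTIFICATE MyServerCert WITH SUBJECT = 'TDE Certificate';

-- Back up the certificate immediately; without it, an encrypted
-- database cannot be restored on another server.
BACKUP CERTIFICATE MyServerCert TO FILE = 'C:\Backup\MyServerCert.cer'
WITH PRIVATE KEY (FILE = 'C:\Backup\MyServerCert.pvk',
                  ENCRYPTION BY PASSWORD = 'Str0ng!PvkPwd');
```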


Column-Level Encryption
Encrypt specific columns like passwords or credit card numbers with a symmetric key protected by a certificate:

CREATE SYMMETRIC KEY CardKey
WITH ALGORITHM = AES_256
ENCRYPTION BY CERTIFICATE MyCert;

Values are then encrypted with ENCRYPTBYKEY after opening the key with OPEN SYMMETRIC KEY.

5. Avoid Hardcoding Credentials
Never hardcode usernames, passwords, or keys in your code.
Always use:

  • Environment variables
  • Secure configuration management
  • Secret stores (e.g., Azure Key Vault, AWS Secrets Manager)

6. Enable Row-Level Security (RLS)
Row-Level Security restricts data visibility based on user or role.

Example
CREATE SECURITY POLICY SalesFilter
ADD FILTER PREDICATE dbo.fnSecurityPredicate(UserID)
ON dbo.Sales WITH (STATE = ON);


This ensures each user can only see data they are authorized to view.
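The policy above references a predicate function that the example doesn't show. A minimal sketch of what dbo.fnSecurityPredicate might look like; the SESSION_CONTEXT lookup is an assumption for illustration, your predicate may compare against USER_NAME() or a mapping table instead:

```sql
CREATE FUNCTION dbo.fnSecurityPredicate (@UserID INT)
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN
    -- Return a row (allow access) only when the row's UserID matches
    -- the caller's identity stored in session context.
    SELECT 1 AS AllowAccess
    WHERE @UserID = CAST(SESSION_CONTEXT(N'UserID') AS INT);
```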

7. Implement Data Masking
Use Dynamic Data Masking to hide sensitive information from unauthorized users.
ALTER TABLE Customers
ALTER COLUMN Email ADD MASKED WITH (FUNCTION = 'email()');

Result
An admin (or any user with UNMASK permission) sees the full email address; an analyst sees a masked value such as aXXX@XXXX.com, since the default email() mask keeps only the first character.


8. Regularly Patch and Update SQL Server
Always apply the latest SQL Server Service Packs and Cumulative Updates .
Outdated versions often contain known vulnerabilities that hackers exploit.

9. Use Secure Network Connections (SSL/TLS)
Enable encryption for data in transit.
In SQL Server Configuration Manager, set Force Encryption = Yes under the server's network protocols.


In the connection string:
Encrypt=True;TrustServerCertificate=False;

10. Audit and Monitor Database Activity
Enable SQL Server Audit to track actions such as login attempts, schema changes, or data access.
CREATE SERVER AUDIT Audit_LoginTracking
TO FILE (FILEPATH = 'C:\AuditLogs\')
WITH (ON_FAILURE = CONTINUE);

Then:
ALTER SERVER AUDIT Audit_LoginTracking WITH (STATE = ON);

You can later review logs to identify suspicious activities.

Flowchart: SQL Server Security Flow

Here’s a simple visualization of how SQL Server enforces security at multiple layers:

            ┌─────────────────────────┐
            │   User / Application    │
            └──────────┬──────────────┘
                       │
                       ▼
         ┌────────────────────────┐
         │ Authentication Layer   │
         │ (Login / Password / AD)│
         └─────────────┬──────────┘
                       │
                       ▼
         ┌────────────────────────┐
         │ Authorization Layer    │
         │ (Roles, Permissions)   │
         └─────────────┬──────────┘
                       │
                       ▼
         ┌─────────────────────────┐
         │ Row-Level / Data Access │
         │ (RLS, Masking, Filters) │
         └─────────────┬───────────┘
                       │
                       ▼
         ┌─────────────────────────┐
         │ Encryption Layer        │
         │ (TDE, SSL, Column Key)  │
         └─────────────┬───────────┘
                       │
                       ▼
         ┌─────────────────────────┐
         │ Auditing & Monitoring   │
         │ (Logs, Alerts, Reports) │
         └─────────────────────────┘

This layered approach ensures defense at every step of the data access process.

Final Checklist for Developers

Security Area        Best Practice               Example
Input Handling       Parameterized Queries       @param
Access Control       Least Privilege             Limited Roles
Data Protection      Encryption & Masking        TDE / AES
Secrets Management   No Hardcoded Credentials    Azure Key Vault
Monitoring           SQL Server Audit            Audit Logs


Conclusion

Developers, not only DBAs, must build database security into the application from the start.

Together, authentication, authorization, encryption, and auditing make up a safe SQL Server configuration.
Remember:
"Performance problems can harm your app, but security problems can ruin your company."

So make SQL Server security a habit rather than a checklist by following these practices consistently.

HostForLIFEASP.NET SQL Server 2022 Hosting



European SQL Server 2022 Hosting :: Mastering Complex SQL Server Queries: Turning Data into Insight

clock October 29, 2025 08:43 by author Peter

SQL Server continues to be one of the most potent relational database systems in today's data-driven world. However, the ability to write sophisticated SQL queries frequently distinguishes novices from pros. Business reports, analytics dashboards, and performance-driven apps are built on complex queries.

This article covers real-world complex SQL Server queries with clear examples: joins, subqueries, window functions, CTEs, pivoting, and more.

1. Using Subqueries for Conditional Data Retrieval
Scenario:
You need to find all employees whose salary is higher than the average salary of their department.
Query:
SELECT
    EmpName,
    DepartmentId,
    Salary
FROM Employees E
WHERE Salary > (
    SELECT AVG(Salary)
    FROM Employees
    WHERE DepartmentId = E.DepartmentId
);


Explanation:

  • The inner query calculates the average salary per department.
  • The outer query compares each employee’s salary to that departmental average.

This is a correlated subquery, as it depends on the outer query.

2. Combining Multiple Tables with JOINS
Scenario:
Retrieve all orders along with customer names and product details.
Query:
SELECT
    C.CustomerName,
    O.OrderId,
    P.ProductName,
    O.OrderDate,
    OD.Quantity,
    (OD.Quantity * OD.UnitPrice) AS TotalAmount
FROM Orders O
INNER JOIN Customers C ON O.CustomerId = C.CustomerId
INNER JOIN OrderDetails OD ON O.OrderId = OD.OrderId
INNER JOIN Products P ON OD.ProductId = P.ProductId
WHERE O.OrderDate >= '2025-01-01'
ORDER BY O.OrderDate DESC;


Explanation:
This query combines four tables using inner joins to give a comprehensive view of orders placed in 2025.

3. Ranking Data Using Window Functions
Scenario:
List top 3 highest-paid employees in each department.
Query:
SELECT
    DepartmentId,
    EmpName,
    Salary,
    RANK() OVER (PARTITION BY DepartmentId ORDER BY Salary DESC) AS SalaryRank
FROM Employees
WHERE Salary IS NOT NULL


Then wrap it to filter top 3:
SELECT *
FROM (
    SELECT
        DepartmentId,
        EmpName,
        Salary,
        RANK() OVER (PARTITION BY DepartmentId ORDER BY Salary DESC) AS SalaryRank
    FROM Employees
) AS Ranked
WHERE SalaryRank <= 3;


Explanation:
The RANK() function assigns ranking per department.
The outer query filters the top 3 salaries per department.

4. Simplifying Logic with Common Table Expressions (CTE)
Scenario:
Find employees who earn more than the average salary in their department (using CTE for clarity).
Query:
WITH DeptAverage AS (
    SELECT
        DepartmentId,
        AVG(Salary) AS AvgSalary
    FROM Employees
    GROUP BY DepartmentId
)
SELECT
    E.EmpName,
    E.DepartmentId,
    E.Salary,
    D.AvgSalary
FROM Employees E
JOIN DeptAverage D ON E.DepartmentId = D.DepartmentId
WHERE E.Salary > D.AvgSalary;

Explanation:
The CTE (DeptAverage) simplifies complex nested subqueries, making the query more readable and maintainable.

5. Transforming Rows to Columns with PIVOT
Scenario:
Show total sales by product across different months.
Query:
SELECT *
FROM (
    SELECT
        ProductName,
        DATENAME(MONTH, OrderDate) AS [Month],
        (Quantity * UnitPrice) AS TotalSales
    FROM OrderDetails OD
    JOIN Orders O ON OD.OrderId = O.OrderId
    JOIN Products P ON OD.ProductId = P.ProductId
) AS SourceTable
PIVOT (
    SUM(TotalSales)
    FOR [Month] IN ([January], [February], [March], [April], [May], [June])
) AS PivotTable;

Explanation:
This query pivots monthly sales data into columns, allowing easier visualization and reporting.

6. Recursive Queries with CTE
Scenario:
Retrieve the full employee hierarchy, starting from top-level managers and walking down to their reports.
Query:
WITH EmployeeHierarchy AS (
    SELECT
        EmpId, EmpName, ManagerId
    FROM Employees
    WHERE ManagerId IS NULL  -- Top-level managers

    UNION ALL

    SELECT
        E.EmpId, E.EmpName, E.ManagerId
    FROM Employees E
    INNER JOIN EmployeeHierarchy EH ON E.ManagerId = EH.EmpId
)
SELECT * FROM EmployeeHierarchy;


Explanation:
This recursive CTE walks through employee-manager relationships to display a full organizational hierarchy.

Performance Tip
When dealing with large datasets:

  • Use indexes on join and filter columns.
  • Avoid using SELECT *; only fetch required columns.
  • Check execution plans using SQL Server Management Studio (SSMS).
  • Use temp tables or CTEs for readability and modularization.
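As a sketch of the first tip, an index aligned with the ranking examples above (column names follow the article's sample Employees table) lets the window function read rows already in partition/sort order:

```sql
-- Matches RANK() OVER (PARTITION BY DepartmentId ORDER BY Salary DESC):
-- rows arrive pre-sorted per department, avoiding a sort in the plan.
CREATE NONCLUSTERED INDEX IX_Employees_Dept_Salary
ON Employees (DepartmentId, Salary DESC)
INCLUDE (EmpName);
```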

Conclusion
Complex SQL queries are powerful tools for solving real-world data problems — from business intelligence reports to analytics dashboards. By mastering subqueries, CTEs, window functions, and pivots, you can transform SQL Server from a data store into an insight engine.

Keep practicing on realistic datasets like AdventureWorks or Northwind — and soon, you’ll write elegant and efficient SQL like a pro!

Next Steps:

  • Try rewriting one of your existing reports using a CTE.
  • Use RANK() and PIVOT for analytics dashboards.
  • Explore query optimization using SET STATISTICS IO, TIME ON.
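For the last step, a minimal measurement pattern (any of the article's queries can go between the two SET statements):

```sql
-- Logical reads and CPU/elapsed time are printed to the Messages tab in SSMS.
SET STATISTICS IO, TIME ON;

SELECT DepartmentId, AVG(Salary) AS AvgSalary
FROM Employees
GROUP BY DepartmentId;

SET STATISTICS IO, TIME OFF;
```

Compare the reported logical reads before and after adding an index to see whether it actually helps.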


European SQL Server 2022 Hosting :: SQL Server and Other Database Systems' Advanced Authentication Types

clock October 23, 2025 07:52 by author Peter

SQL Authentication
SQL Authentication, also known as SQL Login, stores credentials (username and password) inside the SQL Server instance. It’s a database-managed authentication model, independent of Active Directory or Windows accounts.

Features

  • Uses login name and password stored in SQL Server.
  • Ideal for non-domain clients or cross-platform applications.
  • Supports backward compatibility for legacy systems.

Advantages

  • Works outside of domain networks.
  • Simple to set up for service accounts or third-party tools.
  • Enables easy app-level authentication control.

Disadvantages

  • Passwords must be managed manually.
  • No centralized policy enforcement (e.g., MFA).
  • Slightly higher attack surface if passwords are weak.

Best Practices

  • Use strong password policies and encrypt connections (TLS).
  • Restrict login privileges.
  • Periodically rotate credentials and audit logins.
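A minimal sketch combining these practices: CHECK_POLICY and CHECK_EXPIRATION apply the Windows password policy to the SQL login (the name and password are placeholders):

```sql
CREATE LOGIN ReportUser
WITH PASSWORD = 'Str0ng!Passw0rd',
     CHECK_POLICY = ON,       -- enforce complexity and lockout policy
     CHECK_EXPIRATION = ON;   -- force periodic password changes
```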

Windows Authentication
Windows Authentication (also known as Integrated Security or Trusted Connection) uses the Windows or Active Directory identity of the user to authenticate.
It’s the most secure and preferred option for enterprise setups.

Features

  • Uses Kerberos or NTLM protocols for verification.
  • Credentials are not sent directly to SQL Server.
  • Enables Single Sign-On (SSO) via Active Directory.

Advantages

  • Centralized identity control via AD.
  • Strong password and lockout policies automatically applied.
  • Easy role-based access using AD groups.

Disadvantages

  • Requires domain membership or trust relationships.
  • Not ideal for external or Linux-based clients.

Best Practices

  • Prefer Kerberos over NTLM for better security.
  • Use AD groups for permission management.
  • Audit AD memberships regularly.
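To verify that Kerberos is actually being used rather than an NTLM fallback, you can inspect the current connection:

```sql
-- Returns KERBEROS, NTLM, or SQL for the current session.
SELECT auth_scheme
FROM sys.dm_exec_connections
WHERE session_id = @@SPID;
```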

Other Authentication Methods (Short Notes)

Authentication Type                              Description                                          Typical Use
Certificate-Based Authentication                 Uses X.509 certificates for passwordless access.     Secure app-to-app or server-to-server communication.
OAuth 2.0 / OpenID Connect                       Token-based identity used in web and cloud APIs.     SaaS and microservice applications.
Kerberos Delegation                              Extends Windows auth for multi-tier app scenarios.   Linked servers or service accounts.
External Identity Providers (Okta, Ping, etc.)   Federated identity integration across platforms.     Enterprise SSO environments.

Summary Table

Type                      Security Level   Identity Source    Ideal For
SQL Authentication        Moderate         SQL Server         Legacy or non-domain systems
Windows Authentication    High             Active Directory   On-prem enterprise setups
Azure AD Authentication   Very High        Entra ID           Cloud or hybrid systems

HostForLIFEASP.NET SQL Server 2022 Hosting

 

 



European SQL Server 2022 Hosting :: How to Backup Data from Tables in SQL Server (Tables 1 and 2)?

clock October 21, 2025 08:45 by author Peter

To back up data from the existing tables (Tables 1 and 2), you can use two popular SQL techniques: SELECT INTO and INSERT INTO.

Which one to choose depends on whether the backup tables already exist.

1. Quick Backup with Table Creation using SELECT INTO
Using this method, data from the source table is directly copied into a new table.

SELECT * INTO table1_Backup FROM table1;
SELECT * INTO table2_Backup FROM table2;
  • Note: This will fail if table1_Backup or table2_Backup already exists.
  • Ideal for one-time backups or quick cloning.

 2. Using INSERT INTO – Backup into Existing Tables

Use this method when the backup tables are already created (i.e., the schema is predefined).

INSERT INTO table1_Backup SELECT * FROM table1;
INSERT INTO table2_Backup SELECT * FROM table2;
  • No error if backup tables already exist.
  • Useful for regular/daily backups where the structure is fixed.
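A variation that combines both methods: create the backup table's structure once with SELECT INTO and an always-false filter, then refill it with INSERT INTO on each run:

```sql
-- Run once: copies the column structure only, no rows.
SELECT * INTO table1_Backup FROM table1 WHERE 1 = 0;

-- Run on each backup: load the current data.
INSERT INTO table1_Backup SELECT * FROM table1;
```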

Summary

Method        Creates Backup Table?    Use Case
SELECT INTO   Yes                      First-time backup / fast duplication
INSERT INTO   No (table must exist)    Repeated backups / controlled schema

HostForLIFEASP.NET SQL Server 2022 Hosting

 



European SQL Server 2022 Hosting :: A Comprehensive Guide to Linked Servers in SQL Server with Examples

clock October 16, 2025 10:16 by author Peter

In enterprise settings, data is often spread across several SQL Servers or even different database systems (such as Oracle, MySQL, or PostgreSQL). SQL Server provides a robust feature called Linked Server that lets you access and manipulate data across multiple servers as if they were a single database, eliminating the need to manually export and import data.

Using real-world examples, this article describes what a linked server is, how to set it up, and how to query remote data.

What is a Linked Server?
A Linked Server in SQL Server allows you to connect to another database server instance (on the same network or remote) and execute distributed queries (SELECT, INSERT, UPDATE, DELETE) against OLE DB data sources outside of the local SQL Server.

It enables:

  • Cross-server queries
  • Centralized data access
  • Remote procedure execution (EXECUTE AT)
  • Joining tables from different servers

Setting Up a Linked Server
You can create a Linked Server via SQL Server Management Studio (SSMS) or T-SQL script.

Method 1: Using SSMS GUI
Open SSMS → Expand Server Objects → Right-click on Linked Servers → Choose New Linked Server

In the dialog box:

  • Linked server: Enter an alias name (e.g., LinkedServer_Prod)
  • Server type: Choose SQL Server or Other data source
  • Provider: Select Microsoft OLE DB Provider for SQL Server
  • Data source: Enter remote server name or IP

Go to the Security tab and configure credentials:

  • Option 1: Use the current user’s security context
  • Option 2: Specify a remote login and password

Click OK to create the Linked Server.

Method 2: Using T-SQL Script

Here’s how to create a Linked Server using SQL script:
EXEC sp_addlinkedserver
    @server = 'LinkedServer_Prod',
    @srvproduct = '',
    @provider = 'SQLNCLI',  -- deprecated; prefer 'MSOLEDBSQL' on current versions
    @datasrc = '192.168.1.100'; -- Remote Server IP or Name

EXEC sp_addlinkedsrvlogin
    @rmtsrvname = 'LinkedServer_Prod',
    @useself = 'false',
    @locallogin = NULL,
    @rmtuser = 'sa',
    @rmtpassword = 'StrongPassword123';


Example: Querying Data from a Linked Server
Once the Linked Server is created, you can query it using four-part naming convention:
[LinkedServerName].[DatabaseName].[SchemaName].[TableName]

Example 1: Simple SELECT query
SELECT TOP 10 *
FROM LinkedServer_Prod.SalesDB.dbo.Customers;


Example 2: Joining Local and Remote Tables
SELECT
    a.OrderID,
    a.CustomerID,
    b.CustomerName
FROM LocalDB.dbo.Orders a
INNER JOIN LinkedServer_Prod.SalesDB.dbo.Customers b
    ON a.CustomerID = b.CustomerID;

Example 3: Executing Remote Stored Procedure
EXEC LinkedServer_Prod.SalesDB.dbo.sp_GetTopCustomers @TopCount = 5;

Updating Remote Data
You can even insert or update remote tables via Linked Server.

Example 4: Insert into remote table

INSERT INTO LinkedServer_Prod.SalesDB.dbo.Customers (CustomerName, City)
VALUES ('Peter', 'London');

Example 5: Update remote data
UPDATE LinkedServer_Prod.SalesDB.dbo.Customers
SET City = 'Udaipur'
WHERE CustomerID = 101;


Best Practices

  • Use SQL authentication with strong passwords for remote login.
  • Enable RPC and RPC Out only if needed.
  • Use OPENQUERY() for performance optimization with complex joins.
  • Limit access by creating specific database roles and permissions.
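As a sketch of the RPC point: both options are off by default, and enabling them is only needed when you execute remote procedures (EXECUTE AT or remote stored-procedure calls):

```sql
-- 'rpc' allows RPC from the remote server; 'rpc out' allows RPC to it.
EXEC sp_serveroption 'LinkedServer_Prod', 'rpc', 'true';
EXEC sp_serveroption 'LinkedServer_Prod', 'rpc out', 'true';
```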

Using OPENQUERY (Performance Tip)
Instead of the four-part naming method, use OPENQUERY to push the query execution to the remote server:
SELECT *
FROM OPENQUERY(LinkedServer_Prod, 'SELECT CustomerName, City FROM SalesDB.dbo.Customers WHERE City = ''London''');


This approach reduces data transfer and often performs faster.

Removing a Linked Server
When you no longer need a Linked Server, remove it safely:
EXEC sp_dropserver 'LinkedServer_Prod', 'droplogins';

Conclusion
Linked Servers in SQL Server are a powerful way to integrate and access distributed data sources without complex ETL processes. With proper configuration, they can significantly improve data collaboration and reduce maintenance efforts across multiple systems.

However, always monitor performance and secure connections to prevent unauthorized access.

HostForLIFEASP.NET SQL Server 2022 Hosting

 



European SQL Server 2022 Hosting :: How Do I Write SQL to Get Top N Records Per Group?

clock October 10, 2025 13:18 by author Peter

One of the most common problems in SQL development is retrieving the top N rows per group. For example, if you’re working with sales data, you might want to fetch the top 3 sales per customer or the highest salary per department. This query pattern is widely used in reporting, analytics, and data visualization.

In this guide, we’ll explain step by step how to write SQL queries to get top N records per group using simple and advanced techniques. We’ll cover approaches for popular databases like SQL Server, MySQL, and PostgreSQL.

Why Do We Need Top N Records Per Group?
Imagine you have an Orders table, and you want to get the top 2 recent orders for each customer. Without grouping, you’d only get the overall top records. With grouping, you can analyze data per category, user, or department.

Real-world use cases include:

  • Getting the top 3 highest-paid employees per department
  • Fetching the latest 5 transactions per user
  • Retrieving the top 2 selling products in each region

Method 1. Using ROW_NUMBER() in SQL Server
The most common solution uses the ROW_NUMBER() window function.

Example: Top 2 orders per customer
SELECT *
FROM (
    SELECT
        CustomerID,
        OrderID,
        OrderDate,
        ROW_NUMBER() OVER (PARTITION BY CustomerID ORDER BY OrderDate DESC) AS RowNum
    FROM Orders
) AS Ranked
WHERE RowNum <= 2;


Explanation

  • ROW_NUMBER() assigns a unique number to each row within a group.
  • PARTITION BY CustomerID ensures numbering starts fresh for each customer.
  • ORDER BY OrderDate DESC sorts records so the most recent orders are ranked first.
  • Finally, WHERE RowNum <= 2 filters top 2 per group.

Method 2. Using RANK() or DENSE_RANK()
Sometimes you want ties to be included. Instead of ROW_NUMBER(), you can use RANK() or DENSE_RANK().
SELECT *
FROM (
    SELECT
        Department,
        EmployeeName,
        Salary,
        RANK() OVER (PARTITION BY Department ORDER BY Salary DESC) AS Rank
    FROM Employees
) AS Ranked
WHERE Rank <= 3;


Difference
  • RANK() skips numbers after ties.
  • DENSE_RANK() keeps the numbering continuous.

Method 3. Using CROSS APPLY (SQL Server)

Another efficient way in SQL Server is with CROSS APPLY.
SELECT c.CustomerID, o.OrderID, o.OrderDate
FROM Customers c
CROSS APPLY (
    SELECT TOP 2 *
    FROM Orders o
    WHERE o.CustomerID = c.CustomerID
    ORDER BY o.OrderDate DESC
) o;

Why it works: For each customer, SQL applies a subquery to fetch top 2 rows.

Method 4. MySQL and PostgreSQL (LIMIT + Subquery)
In MySQL and PostgreSQL (before window functions were supported), developers used correlated subqueries.
SELECT o1.*
FROM Orders o1
WHERE (
   SELECT COUNT(*)
   FROM Orders o2
   WHERE o2.CustomerID = o1.CustomerID
     AND o2.OrderDate > o1.OrderDate
) < 2;


Here, each row checks how many newer rows exist. If fewer than 2, then it’s in the top 2.

Common Mistakes

  • Forgetting PARTITION BY, which groups the data.
  • Using TOP without applying grouping logic.
  • Not ordering correctly, which gives wrong results.

Best Practices

  • Always use ROW_NUMBER() when you need unique ordering.
  • Use RANK() or DENSE_RANK() if ties matter.
  • Make sure you filter correctly (WHERE RowNum <= N).
  • For very large datasets, ensure indexes exist on partition and order by columns.


Conclusion

Getting the top N records per group in SQL is a common but tricky problem. With the right use of window functions (ROW_NUMBER, RANK, DENSE_RANK) or techniques like CROSS APPLY and LIMIT, you can write efficient queries for SQL Server, MySQL, and PostgreSQL.

HostForLIFEASP.NET SQL Server 2022 Hosting



About HostForLIFE

HostForLIFE is European Windows Hosting Provider which focuses on Windows Platform only. We deliver on-demand hosting solutions including Shared hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes.

We have offered the latest Windows 2019 Hosting, ASP.NET 5 Hosting, ASP.NET MVC 6 Hosting and SQL 2019 Hosting.

