European Windows 2019 Hosting BLOG

BLOG about Windows 2019 Hosting and SQL 2019 Hosting - Dedicated to European Windows Hosting Customers

European SQL Server 2022 Hosting :: A Comprehensive Guide to Linked Servers in SQL Server with Examples

clock October 16, 2025 10:16 by author Peter

Data is often spread across several SQL Servers or even different database systems (such as Oracle, MySQL, or PostgreSQL) in enterprise settings. SQL Server provides a robust feature called Linked Server that lets you access and manipulate data across several servers as if they were a single database, eliminating the need for manual data export and import.

Using real-world examples, this article describes what a linked server is, how to set it up, and how to query remote data.

What is a Linked Server?
A Linked Server in SQL Server allows you to connect to another database server instance (on the same network or remote) and execute distributed queries (SELECT, INSERT, UPDATE, DELETE) against OLE DB data sources outside of the local SQL Server.

It enables:

  • Cross-server queries
  • Centralized data access
  • Remote procedure execution (EXECUTE AT)
  • Joining tables from different servers

Setting Up a Linked Server
You can create a Linked Server via SQL Server Management Studio (SSMS) or T-SQL script.

Method 1: Using SSMS GUI
Open SSMS → Expand Server Objects → Right-click on Linked Servers → Choose New Linked Server

In the dialog box:

  • Linked server: Enter an alias name (e.g., LinkedServer_Prod)
  • Server type: Choose SQL Server or Other data source
  • Provider: Select Microsoft OLE DB Provider for SQL Server
  • Data source: Enter remote server name or IP

Go to the Security tab and configure credentials:

  • Option 1: Use the current user’s security context
  • Option 2: Specify a remote login and password

Click OK to create the Linked Server.

Method 2: Using T-SQL Script

Here’s how to create a Linked Server using SQL script:
EXEC sp_addlinkedserver
    @server = 'LinkedServer_Prod',
    @srvproduct = '',
    @provider = 'SQLNCLI',      -- deprecated; on current systems prefer 'MSOLEDBSQL'
    @datasrc = '192.168.1.100'; -- Remote Server IP or Name

EXEC sp_addlinkedsrvlogin
    @rmtsrvname = 'LinkedServer_Prod',
    @useself = 'false',
    @locallogin = NULL,
    @rmtuser = 'sa',            -- prefer a dedicated, least-privileged login over sa
    @rmtpassword = 'StrongPassword123';


Example: Querying Data from a Linked Server
Once the Linked Server is created, you can query it using the four-part naming convention:
[LinkedServerName].[DatabaseName].[SchemaName].[TableName]

Example 1: Simple SELECT query
SELECT TOP 10 *
FROM LinkedServer_Prod.SalesDB.dbo.Customers;


Example 2: Joining Local and Remote Tables
SELECT
    a.OrderID,
    a.CustomerID,
    b.CustomerName
FROM LocalDB.dbo.Orders a
INNER JOIN LinkedServer_Prod.SalesDB.dbo.Customers b
    ON a.CustomerID = b.CustomerID;

Example 3: Executing Remote Stored Procedure
EXEC LinkedServer_Prod.SalesDB.dbo.sp_GetTopCustomers @TopCount = 5;
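
The same call can also be made with EXECUTE ... AT, which sends the whole batch to the remote server for execution there (requires the RPC Out server option; see Best Practices below). A minimal sketch:
EXEC ('EXEC SalesDB.dbo.sp_GetTopCustomers @TopCount = 5;') AT LinkedServer_Prod;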

Updating Remote Data
You can even insert or update remote tables via Linked Server.

Example 4: Insert into remote table

INSERT INTO LinkedServer_Prod.SalesDB.dbo.Customers (CustomerName, City)
VALUES ('Peter', 'London');

Example 5: Update remote data
UPDATE LinkedServer_Prod.SalesDB.dbo.Customers
SET City = 'Udaipur'
WHERE CustomerID = 101;


Best Practices

  • Use SQL authentication with strong passwords for remote login.
  • Enable RPC and RPC Out only if needed (see the sketch after this list).
  • Use OPENQUERY() for performance optimization with complex joins.
  • Limit access by creating specific database roles and permissions.
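
As a sketch of the RPC bullet above, both options can be toggled with sp_serveroption (server name taken from the earlier example):
EXEC sp_serveroption @server = 'LinkedServer_Prod', @optname = 'rpc', @optvalue = 'true';
EXEC sp_serveroption @server = 'LinkedServer_Prod', @optname = 'rpc out', @optvalue = 'true';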

Using OPENQUERY (Performance Tip)
Instead of the four-part naming method, use OPENQUERY to push the query execution to the remote server:
SELECT *
FROM OPENQUERY(LinkedServer_Prod, 'SELECT CustomerName, City FROM SalesDB.dbo.Customers WHERE City = ''London''');


This approach reduces data transfer and often performs faster.

Removing a Linked Server
When you no longer need a Linked Server, remove it safely:
EXEC sp_dropserver 'LinkedServer_Prod', 'droplogins';

Conclusion
Linked Servers in SQL Server are a powerful way to integrate and access distributed data sources without complex ETL processes. With proper configuration, they can significantly improve data collaboration and reduce maintenance efforts across multiple systems.

However, always monitor performance and secure connections to prevent unauthorized access.

HostForLIFEASP.NET SQL Server 2022 Hosting

 



European SQL Server 2022 Hosting :: How Do I Write SQL to Get Top N Records Per Group?

clock October 10, 2025 13:18 by author Peter

One of the most common problems in SQL development is retrieving the top N rows per group. For example, if you’re working with sales data, you might want to fetch the top 3 sales per customer or the highest salary per department. This query pattern is widely used in reporting, analytics, and data visualization.

In this guide, we’ll explain step by step how to write SQL queries to get top N records per group using simple and advanced techniques. We’ll cover approaches for popular databases like SQL Server, MySQL, and PostgreSQL.

Why Do We Need Top N Records Per Group?
Imagine you have an Orders table, and you want to get the top 2 recent orders for each customer. Without grouping, you’d only get the overall top records. With grouping, you can analyze data per category, user, or department.

Real-world use cases include:

  • Getting the top 3 highest-paid employees per department
  • Fetching the latest 5 transactions per user
  • Retrieving the top 2 selling products in each region

Method 1. Using ROW_NUMBER() in SQL Server
The most common solution uses the ROW_NUMBER() window function.

Example: Top 2 orders per customer
SELECT *
FROM (
    SELECT
        CustomerID,
        OrderID,
        OrderDate,
        ROW_NUMBER() OVER (PARTITION BY CustomerID ORDER BY OrderDate DESC) AS RowNum
    FROM Orders
) AS Ranked
WHERE RowNum <= 2;


Explanation

  • ROW_NUMBER() assigns a unique number to each row within a group.
  • PARTITION BY CustomerID ensures numbering starts fresh for each customer.
  • ORDER BY OrderDate DESC sorts records so the most recent orders are ranked first.
  • Finally, WHERE RowNum <= 2 filters top 2 per group.

Method 2. Using RANK() or DENSE_RANK()
Sometimes you want ties to be included. Instead of ROW_NUMBER(), you can use RANK() or DENSE_RANK().
SELECT *
FROM (
    SELECT
        Department,
        EmployeeName,
        Salary,
        RANK() OVER (PARTITION BY Department ORDER BY Salary DESC) AS Rank
    FROM Employees
) AS Ranked
WHERE Rank <= 3;


Difference
RANK() allows skipping numbers in case of ties.
DENSE_RANK() keeps numbers continuous.
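
For example, with salaries 5000, 5000, and 4000 in one department, RANK() assigns 1, 1, 3 while DENSE_RANK() assigns 1, 1, 2.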

Method 3. Using CROSS APPLY (SQL Server)

Another efficient way in SQL Server is with CROSS APPLY.
SELECT c.CustomerID, o.OrderID, o.OrderDate
FROM Customers c
CROSS APPLY (
    SELECT TOP 2 *
    FROM Orders o
    WHERE o.CustomerID = c.CustomerID
    ORDER BY o.OrderDate DESC
) o;

Why it works: For each customer, SQL applies a subquery to fetch top 2 rows.

Method 4. MySQL and PostgreSQL (Correlated Subquery)
In MySQL and PostgreSQL (before window functions were supported), developers used correlated subqueries.
SELECT o1.*
FROM Orders o1
WHERE (
   SELECT COUNT(*)
   FROM Orders o2
   WHERE o2.CustomerID = o1.CustomerID
     AND o2.OrderDate > o1.OrderDate
) < 2;


Here, each row checks how many newer rows exist. If fewer than 2, then it’s in the top 2.
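
Note that MySQL 8.0+ and all modern PostgreSQL versions support the window functions from Method 1. PostgreSQL additionally offers a LATERAL join that mirrors CROSS APPLY; a minimal sketch, assuming the same Customers and Orders tables:
SELECT o.*
FROM Customers c
CROSS JOIN LATERAL (
    SELECT *
    FROM Orders o2
    WHERE o2.CustomerID = c.CustomerID
    ORDER BY o2.OrderDate DESC
    LIMIT 2
) AS o;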

Common Mistakes

  • Forgetting PARTITION BY, which groups the data.
  • Using TOP without applying grouping logic.
  • Not ordering correctly, which gives wrong results.

Best Practices

  • Always use ROW_NUMBER() when you need unique ordering.
  • Use RANK() or DENSE_RANK() if ties matter.
  • Make sure you filter correctly (WHERE RowNum <= N).
  • For very large datasets, ensure indexes exist on partition and order by columns.


Conclusion

Getting the top N records per group in SQL is a common but tricky problem. With the right use of window functions (ROW_NUMBER, RANK, DENSE_RANK) or techniques like CROSS APPLY and LIMIT, you can write efficient queries for SQL Server, MySQL, and PostgreSQL.

HostForLIFEASP.NET SQL Server 2022 Hosting



European SQL Server 2022 Hosting :: How to Improve SQL Queries to Run More Quickly in Big Databases?

clock September 25, 2025 09:06 by author Peter

The Significance of Optimizing SQL Queries
Slow SQL queries can drag down your entire system when you work with big databases containing millions of rows. Besides taking longer to respond, poorly written queries consume more CPU, memory, and I/O. Query optimization ensures that reports, lookups, and transactions are processed quickly in applications such as e-commerce, banking, analytics, and log management systems.

Query optimization is about making your queries smarter rather than heavier. By using indexing, effective filtering, caching, and partitioning, you can significantly cut execution time and improve scalability.

Use Indexes to Speed Up Searches
Indexes are like the index of a book – instead of flipping through every page, the database can jump directly to the relevant section.

  • Create indexes on columns frequently used in WHERE, JOIN, and ORDER BY.
  • Use composite indexes when queries filter by multiple columns.
  • Avoid over-indexing, as too many indexes can slow down INSERT and UPDATE operations.

-- Adding index for faster lookups
CREATE INDEX idx_customers_email ON customers(email);

Indexes can reduce query time from several seconds to milliseconds in large databases.

Select Only the Columns You Need

Using SELECT * is one of the most common mistakes in SQL. It retrieves all columns, even when you need only a few. This adds unnecessary I/O and slows down queries.
-- Bad: retrieves everything
SELECT * FROM orders;

-- Good: retrieves only needed data
SELECT order_id, customer_id, total_amount FROM orders;


This practice is especially important in wide tables with dozens of columns.

Write Efficient Joins

Joins are powerful but can be costly if not written carefully.

  • Ensure the join columns are indexed.
  • Use INNER JOIN instead of LEFT JOIN when you only need matching rows.
  • Avoid redundant joins if the data can be obtained from a single table.

SELECT c.name, o.order_id
FROM customers c
INNER JOIN orders o ON c.customer_id = o.customer_id;


Efficient joins prevent full table scans and make queries scale better in large datasets.

Filter Data Properly with WHERE Clauses

Well-structured WHERE clauses help the database use indexes effectively.

  • Avoid applying functions directly to indexed columns.
  • Rewrite queries using date ranges or numeric ranges instead of transformations.


-- Bad: prevents index usage
SELECT * FROM sales WHERE YEAR(sale_date) = 2025;

-- Good: uses index efficiently
SELECT * FROM sales
WHERE sale_date >= '2025-01-01' AND sale_date < '2026-01-01';


This approach drastically reduces execution time when working with time-based queries.

Limit the Number of Rows You Retrieve

Fetching millions of rows when you only need the latest 100 records wastes time and resources. Always use LIMIT or TOP.
SELECT * FROM logs ORDER BY log_time DESC LIMIT 100;

This is crucial for dashboards, reports, and log systems that only display recent activity.

Check and Understand Execution Plans
Execution plans show how the database engine processes a query. By analyzing them, you can find bottlenecks.

  • Use EXPLAIN in MySQL/PostgreSQL or SET SHOWPLAN_ALL ON in SQL Server.
  • Watch for full table scans, which indicate the query is ignoring indexes.
  • Optimize queries so the database performs index seeks instead of scans.

EXPLAIN SELECT * FROM orders WHERE customer_id = 1001;

Execution plans are your best tool for diagnosing and fixing slow queries.

Use Partitioning for Very Large Tables
Partitioning splits a huge table into smaller, more manageable parts. This way, queries only scan the relevant partition instead of the whole dataset.
Example: Partition a sales table by year. Queries that fetch 2025 sales only look at the 2025 partition, reducing execution time dramatically.
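
A minimal SQL Server sketch of yearly partitioning (function, scheme, and table names are illustrative):
CREATE PARTITION FUNCTION pf_SaleYear (DATE)
AS RANGE RIGHT FOR VALUES ('2024-01-01', '2025-01-01');

CREATE PARTITION SCHEME ps_SaleYear
AS PARTITION pf_SaleYear ALL TO ([PRIMARY]);

-- Rows are routed by sale_date; a query filtered to 2025 touches only that partition
CREATE TABLE sales_partitioned (
    id INT,
    sale_date DATE,
    amount DECIMAL(10, 2)
) ON ps_SaleYear (sale_date);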

Cache Expensive Queries
If your query is frequently executed and rarely changes, caching can save time.

  • Use application-level caching with Redis or Memcached.
  • Use materialized views (in PostgreSQL, Oracle, etc.) for pre-computed results.

Caching reduces repeated execution of heavy queries and improves response times.
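
As a sketch of the materialized-view option (PostgreSQL syntax; view name is illustrative):
CREATE MATERIALIZED VIEW mv_daily_sales AS
SELECT sale_date, SUM(amount) AS total_amount
FROM sales
GROUP BY sale_date;

-- Re-run periodically (or after bulk loads) to refresh the pre-computed results
REFRESH MATERIALIZED VIEW mv_daily_sales;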

Rewrite Subqueries for Better Performance

Subqueries inside IN clauses or correlated subqueries often slow queries down. Rewriting them with EXISTS or JOIN usually helps.
-- Slow version
SELECT name FROM customers
WHERE customer_id IN (SELECT customer_id FROM orders);

-- Optimized version
SELECT name FROM customers c
WHERE EXISTS (SELECT 1 FROM orders o WHERE o.customer_id = c.customer_id);


This improves performance in large databases with millions of rows.

Insert and Update in Batches

Single-row inserts and updates can be slow. Batch operations reduce overhead by grouping multiple records into a single query.
-- Slow: multiple inserts
INSERT INTO sales VALUES (1, '2025-09-01', 100);
INSERT INTO sales VALUES (2, '2025-09-02', 200);

-- Fast: batch insert
INSERT INTO sales (id, sale_date, amount)
VALUES
(1, '2025-09-01', 100),
(2, '2025-09-02', 200);


This is essential for ETL pipelines and bulk data processing.

Real-World Case Study: Before vs After Optimization
Scenario

An e-commerce platform runs a query on a table with 50 million orders. The original query took 12 seconds to complete.

Original Query (Slow)

SELECT *
FROM orders
WHERE YEAR(order_date) = 2025
  AND customer_email = 'user@example.com'
ORDER BY order_date DESC;

  • Used SELECT * → fetched all columns unnecessarily.
  • Used YEAR(order_date) → prevented index usage.
  • No index on customer_email → forced full table scan.

Execution Plan: Scanned all 50M rows.

Performance: ~12 seconds.

Optimized Query (Fast)

-- Create composite index
CREATE INDEX idx_orders_email_date
ON orders (customer_email, order_date);

-- Optimized query
SELECT order_id, order_date, total_amount
FROM orders
WHERE customer_email = 'user@example.com'
  AND order_date >= '2025-01-01'
  AND order_date < '2026-01-01'
ORDER BY order_date DESC
LIMIT 100;


  • Selected only the necessary columns.
  • Replaced YEAR() with a date range to keep the index usable.
  • Created a composite index on (customer_email, order_date).
  • Added LIMIT for pagination.

Before vs After Comparison

Factor          | Before | After
Query Time      | ~12s   | ~0.15s
Rows Scanned    | 50M    | ~2,500
CPU Usage       | High   | Low
User Experience | Slow   | Instant

Visual Flow of SQL Optimization

flowchart TD
    A[Slow Query Detected] --> B[Check Execution Plan]
    B --> C[Add or Adjust Indexes]
    C --> D[Rewrite SELECT, JOIN, WHERE Clauses]
    D --> E[Apply LIMIT or Pagination]
    E --> F[Use Partitioning or Caching]
    F --> G[Measure Performance Again]

Summary
Optimizing SQL queries is essential when working with large databases. Small improvements such as using indexes, avoiding SELECT *, rewriting WHERE clauses, limiting results, and analyzing execution plans can cut query times from seconds to milliseconds. For very large datasets, partitioning, caching, and batch operations make queries even faster. The case study showed how a 12-second query was reduced to 0.15 seconds with proper optimization, proving that smart SQL tuning leads to better performance, efficient resource use, and improved user experience.

HostForLIFEASP.NET SQL Server 2022 Hosting

 



SQL Server Hosting - HostForLIFE :: Check Constraint in SQL Server

clock September 18, 2025 10:05 by author Peter

In this article I describe Check Constraints on a table: how to add a Check Constraint to a table, how to remove a Check Constraint from a table, and the limitations of a Check Constraint. Before explaining Check Constraints, it is necessary to first explain what Domain Integrity is.

Domain Integrity
Domain Integrity ensures that values are valid for columns and prevents invalid values for columns within a database.

Check Constraint

A Check Constraint is a rule that identifies valid values for columns of data. A Check Constraint helps to enforce Domain Integrity. If the condition in a Check Constraint is not satisfied then it prevents the value from entering into the database.

Syntax
CREATE TABLE tableName (
    Column1 dataType CHECK (expression),
    Column2 dataType,
    ColumnN dataType
);

Example
CREATE TABLE emp (
    empId INT CHECK (empId > 10),
    empName VARCHAR(15)
);

Output

If we try to insert a record whose empId is not greater than 10, the insert fails with a CHECK constraint violation error.
INSERT INTO emp (empId, empName)
VALUES (8, 'd');


Output

Dropping the Check Constraint
First of all, we can determine the name of the constraint using the following command.
EXEC sp_help emp;

Output

Now execute the following command.
ALTER TABLE emp
DROP CONSTRAINT CK__emp__empId__1A14E395;


Output

Adding the Check Constraint
ALTER TABLE emp
ADD CHECK (empId > 15);


Output
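
Tip: naming the constraint yourself avoids having to look up the auto-generated name before dropping it. A minimal sketch:
ALTER TABLE emp
ADD CONSTRAINT CK_emp_empId CHECK (empId > 15);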

Limitation
A Check Constraint rejects values that are invalid, that is, values that do not satisfy the check condition. In the case of NULL, however, a Check Constraint allows the value into the database, because the condition evaluates to UNKNOWN rather than FALSE.

Insertion of Null value
INSERT INTO emp
VALUES (NULL, 'd');

Output

Summary
In this article, I described a Check Constraint in SQL Server. I hope this article has helped you to understand this topic. Please share if you know more about this. Your feedback and constructive contributions are welcome.

HostForLIFEASP.NET SQL Server 2022 Hosting

 



European SQL Server 2022 Hosting :: Developing, Protecting, and Improving SQL Server: A Practical Example of E-Commerce

clock September 9, 2025 06:54 by author Peter

1. Design of Schemas
Essential Ideas

  • To cut down on redundancy, normalize
  • Employ restrictions and foreign keys.
  • For auditing, keep track of timestamps.

Schema Diagram
Customers ───< Orders ───< OrderItems >─── Products

Example Table: Customers

Column     | Type          | Notes
CustomerID | INT (PK)      | Identity
FirstName  | NVARCHAR(50)  | Not Null
LastName   | NVARCHAR(50)  | Not Null
Email      | NVARCHAR(100) | Unique, Not Null
CreatedAt  | DATETIME2     | Default SYSUTCDATETIME()

Checklist

  • Primary & foreign keys defined
  • Appropriate data types
  • Default timestamps added
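
As a sketch, the Customers table above could be declared like this (T-SQL; constraint details inferred from the table):
CREATE TABLE Customers (
    CustomerID INT IDENTITY(1,1) PRIMARY KEY,
    FirstName  NVARCHAR(50)  NOT NULL,
    LastName   NVARCHAR(50)  NOT NULL,
    Email      NVARCHAR(100) NOT NULL UNIQUE,
    CreatedAt  DATETIME2     NOT NULL DEFAULT SYSUTCDATETIME()
);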

2. Indexing & Query Tuning
Scenario: Calculate total spending per customer.

Query

SELECT c.FirstName, c.LastName, SUM(oi.Quantity * oi.Price) AS TotalSpent
FROM Customers c
JOIN Orders o ON c.CustomerID = o.CustomerID
JOIN OrderItems oi ON o.OrderID = oi.OrderID
WHERE o.OrderDate >= '2025-01-01'
GROUP BY c.FirstName, c.LastName
ORDER BY TotalSpent DESC;

Optimization: Add covering indexes to speed up joins.

Table      | Index                                 | Purpose
Orders     | (CustomerID, OrderDate)               | Filter & join optimization
OrderItems | (OrderID, ProductID, Quantity, Price) | Aggregate faster
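
A sketch of the two covering indexes from the table above (index names are illustrative):
CREATE INDEX IX_Orders_Customer_Date
ON Orders (CustomerID, OrderDate);

CREATE INDEX IX_OrderItems_Order
ON OrderItems (OrderID, ProductID, Quantity, Price);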

3. ETL: Efficient Data Loading

Scenario: Update daily product prices from CSV.

Process Diagram
CSV File → Staging Table → Merge → Products Table

Example SQL
BULK INSERT ProductStaging FROM 'C:\Data\ProductPrices.csv'
WITH (FIRSTROW = 2, FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');

MERGE Products AS target
USING ProductStaging AS source
ON target.ProductID = source.ProductID
WHEN MATCHED THEN UPDATE SET Price = source.Price
WHEN NOT MATCHED BY TARGET THEN INSERT (Name, Price, Stock) VALUES (source.Name, source.Price, source.Stock);


Checklist

  • Validate incoming data
  • Only update changed records
  • Log ETL results

4. Backup & Recovery
Backup Strategy Diagram

Full Backup → Differential Backup → Transaction Log Backups

Example SQL
BACKUP DATABASE ECommerceDB TO DISK = 'C:\Backups\ECommerceDB_FULL.bak' WITH INIT, COMPRESSION;
BACKUP LOG ECommerceDB TO DISK = 'C:\Backups\ECommerceDB_Log.trn';

Checklist

  • Full backups scheduled
  • Transaction logs backed up for minimal data loss
  • Test restores regularly

5. Security: Controlling Access

Role-Based Access Table

Role    | Access Type | Notes
AppUser | Read/Write  | Application use only
Analyst | Read-Only   | Reporting and analytics only
Admin   | Full        | DBAs only

SQL Example
CREATE ROLE ReadOnlyRole;
ALTER ROLE ReadOnlyRole ADD MEMBER [ReportingUser];
DENY INSERT, UPDATE, DELETE TO ReadOnlyRole;

6. Monitoring & Maintenance
Maintenance Flow Diagram
Index Rebuild → Statistics Update → Query Performance Monitoring → Alerts

Example SQL for Long-Running Queries
SELECT TOP 10
    qs.total_elapsed_time / 1000 AS TotalMs,
    qs.execution_count,
    SUBSTRING(qt.text, (qs.statement_start_offset / 2) + 1,
        ((CASE WHEN qs.statement_end_offset = -1
               THEN LEN(CONVERT(NVARCHAR(MAX), qt.text)) * 2
               ELSE qs.statement_end_offset END
          - qs.statement_start_offset) / 2) + 1) AS QueryText
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) qt
ORDER BY TotalMs DESC;


Checklist

  • Monitor slow queries & deadlocks
  • Rebuild fragmented indexes
  • Update statistics regularly
  • Set alerts for critical issues

HostForLIFEASP.NET SQL Server 2022 Hosting



European SQL Server 2022 Hosting :: SQL Server Statistics Update

clock September 3, 2025 08:26 by author Peter

The query optimizer, one of SQL Server's primary features, uses statistics to determine a query's most effective execution strategy. Statistics are used to estimate how many rows a query will return and to calculate the most effective access path to the data. In this post, we will go over how to update statistics in SQL Server so the query optimizer always has recent data for accurate estimates.

What are Statistics in SQL Server?
Statistics describe the distribution of data in one or more tables or indexes in a database; UPDATE STATISTICS is the command or process in SQL Server that refreshes this statistical information. Statistics consist of summary data that allows the query optimizer to make informed decisions about how to retrieve data efficiently. These statistics help SQL Server determine the most optimal query execution plans, which ultimately impacts query performance.

By default, SQL Server automatically updates statistics when a certain threshold of data changes has been reached. This threshold is known as the “auto-update” threshold. However, there are situations where you may want to update statistics manually to ensure that the query optimizer has the most accurate information to work with.

Why are Statistics Important?
Accurate statistics are critical for SQL Server to generate the most efficient execution plan for a query. If the statistics are outdated or inaccurate, the query optimizer may choose a suboptimal execution plan, resulting in slower query performance. Additionally, inaccurate statistics can lead to suboptimal index usage, which can cause excessive disk I/O and memory usage.

When to Update Statistics in SQL Server?
While SQL Server automatically updates statistics for you, there are certain scenarios where you may want to update statistics manually. These scenarios include:

  • Large Data Changes: If a large percentage of data has been added, modified, or deleted from a table, you may want to update statistics manually to ensure that the query optimizer has the most accurate information.
  • Stale Statistics: If the statistics for a table or indexed view are stale (out of date), you may want to update them manually to ensure that the query optimizer has the most accurate information. This can happen when data is added, modified, or deleted from a table or indexed view after the statistics were last updated.
  • Performance Issues: If you are experiencing performance issues with a particular query, updating statistics may help to improve performance by providing the query optimizer with more accurate information.

How to Update Statistics in SQL Server?
Automatic Updates - SQL Server automatically updates statistics when a threshold of data modifications occurs, or when the query optimizer determines that the statistics are out of date. This is the default behavior for SQL Server, and it works well for most databases.
Manual Updates - We can manually update statistics using the UPDATE STATISTICS statement.

Update the statistics for a single table or indexed view using the following command:
UPDATE STATISTICS [table_name]

Where [table_name] is the name of the table or indexed view for which statistics to be updated.
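
You can also target a single statistics object and control the sample size; a sketch (table and statistics names are illustrative):
UPDATE STATISTICS dbo.Orders IX_Orders_CustomerID WITH FULLSCAN;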

Update the statistics for all tables in a database using the following command:
EXEC sp_updatestats

When statistics are updated, SQL Server automatically refreshes the statistics for all columns whose density information is out of date. This means that the update may not necessarily update all statistics for all columns in the table or indexed view.

It's important to note that updating statistics can be a resource-intensive operation, particularly for large tables. Therefore, it's important to carefully consider the frequency and timing of statistics updates to ensure they don't negatively impact system performance.

How to View Statistics Properties?

By using the below T-SQL command:
DBCC SHOW_STATISTICS

For more information, see DBCC SHOW_STATISTICS

We can use the ‘sys.stats’ system catalog view, which provides information about statistics for all user-defined and internal tables and indexes in the current database, by using the following command:
SELECT
    OBJECT_NAME(s.object_id) AS TableName,
    i.name AS IndexName,
    s.name AS StatisticName,
    s.auto_created AS IsAutoCreated,
    s.user_created AS IsUserCreated,
    STATS_DATE(s.object_id, s.stats_id) AS LastUpdated
FROM sys.stats AS s
INNER JOIN sys.indexes AS i
ON s.object_id = i.object_id
  AND s.stats_id = i.index_id
WHERE OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1
ORDER BY TableName, IndexName

This statement retrieves information about statistics for all user-defined tables and indexes in the database, including the table and index names, statistic name, whether the statistic was automatically or manually created, and the date and time the statistics were last updated.

Examine the “LastUpdated” column to determine whether the statistics are up-to-date. If the value in this column is relatively recent, the statistics are up-to-date. However, if the value is old or NULL, the statistics may be out-of-date and need to be updated.

Conclusion
Statistics are an important component of SQL Server's query optimization process. By providing the query optimizer with accurate information about the distribution of data in a table or indexed view, statistics can help to improve query performance. While SQL Server automatically updates statistics for you, there are certain scenarios where you may want to update them manually to ensure that the query optimizer has the most accurate information. By understanding when and how to update statistics in SQL Server, you can help to ensure that your queries are running as efficiently as possible.

HostForLIFEASP.NET SQL Server 2022 Hosting



European SQL Server 2022 Hosting :: Understanding SQL Server Index Fragmentation and How to Address It

clock August 21, 2025 08:50 by author Peter

Index fragmentation is a term that many developers and DBAs come across, and some have suffered from its effects while creating and managing databases over the course of an application's lifecycle.

This article will walk you through the process of understanding:

  • What is fragmentation of an index?
  • When and why it occurs
  • The many kinds (external versus internal)
  • How to prevent and correct it

1. Quick Recap: What is an Index?
An index is a data structure that helps locate specific rows quickly. It works by storing sorted values from one or more columns of a table, along with pointers to the corresponding rows.

In SQL Server, data is stored in pages, fixed-size blocks (typically 8KB each). When the database needs to read data, it loads an entire page into memory.

This is more efficient than fetching single rows, as it reduces disk I/O.

When a query is executed:
  • SQL Server checks if an index exists for the queried columns.
  • If it exists, SQL Server uses it to locate the relevant pages.
  • Only those specific pages are read, instead of scanning the whole table.

2. What is Index Fragmentation?

Index fragmentation occurs when the logical order of data in an index does not match the physical storage order.
  • Logical Order: An index is designed to provide a quick lookup for data based on specific key values. Ideally, the data within the index pages should be stored in a contiguous, ordered manner based on these key values.
  • Physical Order: How pages are actually stored on disk.
When these orders are out of sync, SQL Server has to jump around more, increasing I/O and slowing queries.

Example scenario
  • You insert new rows into a table with a clustered index.
  • SQL Server tries to keep them in order by key.
  • If the target page is full, SQL Server splits the page into two, moving some rows to the new page.
  • Now the physical page order may no longer match the logical order.

Internal vs External Fragmentation

There are two main types of fragmentation:

Internal Fragmentation
  • Occurs inside pages.
  • Pages are only partially filled, leaving empty space.
  • Often happens after deletes or updates that shrink data.
  • Leads to wasted space and more pages to scan.
External Fragmentation
  • Occurs when the order of pages no longer matches their logical sequence.
  • Caused by page splits that scatter data across the file.
  • Leads to more random I/O.
    Logical Order: Page 100 → Page 101 → Page 102
    Physical Order: Page 100 → Page 300 → Page 101 → Page 500 → Page 102
3. Why Does It Matter?
Fragmentation increases
  • I/O operations (more pages read)
  • CPU usage (more pointer lookups)
  • Query execution time
Especially in large datasets, even small increases in page reads can cause significant slowdowns.

4. How to Fix Fragmentation
REORGANIZE: Defragments the index by physically reordering leaf pages. Lightweight, online.
ALTER INDEX IndexName ON TableName REORGANIZE;

REBUILD: Drops and recreates the index. Removes all fragmentation. Can be offline or online (Enterprise Edition).
ALTER INDEX IndexName ON TableName REBUILD;

Important Note
Microsoft recommends:
<5% fragmentation → No action needed.
5–30% fragmentation → REORGANIZE.
>30% fragmentation → REBUILD.
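
To measure fragmentation and decide between the two, query sys.dm_db_index_physical_stats; a minimal sketch for the current database:
SELECT OBJECT_NAME(ips.object_id) AS TableName,
       i.name AS IndexName,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
    ON ips.object_id = i.object_id AND ips.index_id = i.index_id
WHERE ips.avg_fragmentation_in_percent > 5
ORDER BY ips.avg_fragmentation_in_percent DESC;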


By following these guidelines, you can keep your SQL Server indexes healthy and your queries fast.

HostForLIFEASP.NET SQL Server 2022 Hosting



European SQL Server 2022 Hosting :: SQL Cheatsheet: A Comprehensive and Easy-to-Use Guide

clock August 8, 2025 08:27 by author Peter

Relational databases are accessed using SQL (Structured Query Language). It facilitates data storage, retrieval, updating, and deletion. Almost every business uses SQL, from banking systems to internet companies. The ability to use SQL is essential if you wish to work with data. For the most crucial SQL topics, this cheat sheet provides concise and understandable definitions, code examples, and essential points.

1. SELECT Statement
Definition: Used to fetch data from a table.

Example
SELECT name, age FROM Students;

Key Point: Use * to select all columns.

2. WHERE Clause
Definition: Filters records based on a condition.

Example
SELECT * FROM Students WHERE age > 18;

Key Point: Works with operators like =, >, <, BETWEEN, IN, LIKE.

3. ORDER BY
Definition: Sorts the result in ascending or descending order.

Example
SELECT * FROM Students ORDER BY age DESC;

Key Point: Default sort is ascending (ASC).

4. INSERT INTO
Definition: Adds new records to a table.

Example
INSERT INTO Students (name, age) VALUES ('John', 20);

Key Point: Match the column order with the values.

5. UPDATE
Definition: Changes existing records.

Example
UPDATE Students SET age = 21 WHERE name = 'John';


Key Point: Always use WHERE to avoid updating all rows.

6. DELETE
Definition: Removes data from a table.

Example
DELETE FROM Students WHERE name = 'John';


Key Point: Without WHERE, all rows will be deleted.

7. CREATE TABLE
Definition: Creates a new table.
Example
CREATE TABLE Students (
  id INT PRIMARY KEY,
  name VARCHAR(50),
  age INT
);


Key Point: Define data types clearly.

8. ALTER TABLE
Definition: Changes the structure of a table.
Examples

Add column:
ALTER TABLE Students ADD email VARCHAR(100);

Drop column:
ALTER TABLE Students DROP COLUMN email;

9. DROP TABLE
Definition: Deletes the table and its data.

Example
DROP TABLE Students;

Key Point: This action cannot be undone.

10. JOINs
Definition: Combines rows from two or more tables.

Example (INNER JOIN)
SELECT Students.name, Marks.score
FROM Students
INNER JOIN Marks ON Students.id = Marks.student_id;

Types

  • INNER JOIN: Only matching records.
  • LEFT JOIN: All records from left + matches.
  • RIGHT JOIN: All from right + matches.
  • FULL JOIN: All records from both.

11. GROUP BY
Definition: Groups rows based on a column and applies an aggregate function.

Example:
SELECT age, COUNT(*) FROM Students GROUP BY age;

Key Point: Always use with aggregate functions.

12. HAVING
Definition: Filters groups created by GROUP BY.

Example:
SELECT age, COUNT(*)
FROM Students
GROUP BY age
HAVING COUNT(*) > 1;


Key Point: Use HAVING after GROUP BY.

13. Aggregate Functions
Definition: Perform calculations on multiple values.
Common ones: COUNT(), SUM(), AVG(), MIN(), MAX()

Example
SELECT AVG(age) FROM Students;

14. DISTINCT

Definition: Removes duplicate rows.

Example
SELECT DISTINCT age FROM Students;

15. BETWEEN, IN, LIKE
BETWEEN

SELECT * FROM Students WHERE age BETWEEN 18 AND 25;

IN
SELECT * FROM Students WHERE age IN (18, 21, 23);

LIKE
SELECT * FROM Students WHERE name LIKE 'A%';

16. LIMIT / TOP
Definition: Restricts number of rows returned.
-- MySQL / PostgreSQL
SELECT * FROM Students LIMIT 5;

-- SQL Server
SELECT TOP 5 * FROM Students;

17. UNION
Definition: Combines results of two SELECT statements.

Example
SELECT name FROM Students
UNION
SELECT name FROM Teachers;


Key Point: Both queries must have the same number of columns.

18. Subquery
Definition: A query inside another query.

Example
SELECT name FROM Students
WHERE age = (SELECT MAX(age) FROM Students);


19. Views
Definition: A virtual table based on the result of a query.

Example
CREATE VIEW Teenagers AS
SELECT * FROM Students WHERE age BETWEEN 13 AND 19;

20. Indexes
Definition: Improve search performance on columns.

Example
CREATE INDEX idx_name ON Students(name);

Key Point: Indexes speed up SELECT but may slow down INSERT/UPDATE.

21. Constraints
Definition: Rules applied on columns.

Types

  • PRIMARY KEY: Unique and not null.
  • FOREIGN KEY: Links to another table’s primary key.
  • UNIQUE: Ensures all values are different.
  • NOT NULL: Prevents null values.
  • CHECK: Validates a condition.


Example
CREATE TABLE Students (
  id INT PRIMARY KEY,
  age INT CHECK (age >= 0)
);

22. DEFAULT Constraint
Definition: Sets a default value for a column if none is provided.

Example
CREATE TABLE Students (
  id INT,
  name VARCHAR(50),
  status VARCHAR(10) DEFAULT 'active'
);

Key Point: Helps avoid NULL when no value is provided.

23. CASE Statement
Definition: Adds conditional logic inside a query.

Example
SELECT name,
       CASE
         WHEN age >= 18 THEN 'Adult'
         ELSE 'Minor'
       END AS age_group
FROM Students;


Key Point: Works like if-else in SQL.

24. EXISTS

Definition: Checks whether a subquery returns any row.

Example
SELECT name FROM Students s
WHERE EXISTS (
  SELECT 1 FROM Marks m WHERE m.student_id = s.id
);


Key Point: Faster than IN in many cases for large datasets.

25. NOT IN vs NOT EXISTS
Caution: NOT IN returns no rows if the subquery produces any NULL values, because the comparison evaluates to UNKNOWN. Use NOT EXISTS for safer logic.

Example
SELECT name FROM Students
WHERE NOT EXISTS (
  SELECT 1 FROM Graduates WHERE Students.id = Graduates.id
);


26. COALESCE and ISNULL
Definition: Replace NULL with a default value.

Example
SELECT name, COALESCE(email, 'Not Provided') FROM Students;

Key Point: COALESCE returns the first non-null value from a list.
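
ISNULL is the SQL Server-specific two-argument variant:
SELECT name, ISNULL(email, 'Not Provided') FROM Students;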

27. Window Functions
Definition: Performs calculation across a set of rows related to the current row.
Common ones: ROW_NUMBER(), RANK(), DENSE_RANK(), NTILE()

Example
SELECT name, age,
       RANK() OVER (ORDER BY age DESC) AS age_rank
FROM Students;

Key Point: Unlike aggregate functions, window functions do not collapse rows.

28. Common Table Expressions (CTE)
Definition: A temporary named result set used in a query.

Example
WITH Teenagers AS (
  SELECT * FROM Students WHERE age BETWEEN 13 AND 19
)
SELECT * FROM Teenagers;


Key Point: Makes complex queries cleaner and readable.

29. Stored Procedures
Definition: A saved group of SQL statements that can be executed repeatedly.

Example
CREATE PROCEDURE GetAllStudents
AS
BEGIN
  SELECT * FROM Students;
END;


Execution
EXEC GetAllStudents;

30. Functions (User-Defined Functions)
Definition: Return a value based on input parameters.

Example
CREATE FUNCTION GetFullName (@first VARCHAR(50), @last VARCHAR(50))
RETURNS VARCHAR(100)
AS
BEGIN
  RETURN (@first + ' ' + @last);
END;

31. Transactions

Definition: A set of SQL operations that execute together or not at all.

Example
BEGIN TRANSACTION;
UPDATE Accounts SET balance = balance - 100 WHERE id = 1;
UPDATE Accounts SET balance = balance + 100 WHERE id = 2;
COMMIT;


Key Point: Use ROLLBACK to undo changes if something fails.

32. ACID Properties
Definition: Ensures reliable processing of transactions.

  • Atomicity: All or nothing
  • Consistency: Valid state before/after
  • Isolation: Transactions do not affect each other
  • Durability: Changes are permanent

33. Normalization

  • Definition: Organizing data to reduce redundancy.
  • Common Forms: 1NF, 2NF, 3NF
  • Key Point: Normalize to avoid data anomalies. Denormalize only when performance requires it.

34. Foreign Key
Definition: A column that links to another table’s primary key.

Example

CREATE TABLE Marks (
  id INT PRIMARY KEY,
  student_id INT,
  FOREIGN KEY (student_id) REFERENCES Students(id)
);


35. Triggers

Definition: SQL code that runs automatically on data change (INSERT, UPDATE, DELETE).

Example (MySQL)
CREATE TRIGGER LogDelete
AFTER DELETE ON Students
FOR EACH ROW
BEGIN
  INSERT INTO DeletedRecords (name) VALUES (OLD.name);
END;


36. SET Operators (UNION ALL, INTERSECT, EXCEPT)
  • UNION ALL: Includes duplicates.
  • INTERSECT: Returns common rows between two queries.
  • EXCEPT: Returns rows in the first query but not in the second.


Example
SELECT name FROM Students
INTERSECT
SELECT name FROM Teachers;


37. Backup and Restore
Backup Example (SQL Server)
BACKUP DATABASE School TO DISK = 'D:\backup\school.bak';


Restore Example
RESTORE DATABASE School FROM DISK = 'D:\backup\school.bak';

38. TEMPORARY Tables
Definition: Temporary tables exist only during the session or transaction.

Example (MySQL)
CREATE TEMPORARY TABLE TempStudents (
  id INT,
  name VARCHAR(50)
);

Key Point: Good for storing intermediate results. Automatically dropped at session end.

39. TABLE Variables (SQL Server)
Definition: A variable to temporarily hold table data.

Example
DECLARE @Temp TABLE (id INT, name VARCHAR(50));
INSERT INTO @Temp VALUES (1, 'John');

40. MERGE Statement (UPSERT)
Definition: Performs INSERT, UPDATE, or DELETE in one statement based on a match.

Example
MERGE INTO Students AS target
USING (SELECT 1 AS id, 'John' AS name) AS source
ON target.id = source.id
WHEN MATCHED THEN
  UPDATE SET name = source.name
WHEN NOT MATCHED THEN
  INSERT (id, name) VALUES (source.id, source.name);


Key Point: Use cautiously. Complex logic can lead to bugs if not tested well.

41. Recursive CTE
Definition: A CTE that refers to itself. Useful for hierarchical data (e.g., org charts, folder trees).

Example
WITH OrgChart AS (
  SELECT id, manager_id, name FROM Employees WHERE manager_id IS NULL
  UNION ALL
  SELECT e.id, e.manager_id, e.name
  FROM Employees e
  JOIN OrgChart o ON e.manager_id = o.id
)
SELECT * FROM OrgChart;


42. JSON Data Handling
Definition: Querying and storing JSON in SQL columns (supported in PostgreSQL, MySQL, SQL Server).

Example (PostgreSQL)
SELECT data->>'name' AS name FROM users WHERE data->>'age' = '25';

Key Point: Useful when data structure is flexible or semi-structured.

43. PIVOT and UNPIVOT

Definition

  • PIVOT: Converts rows to columns.
  • UNPIVOT: Converts columns to rows.

Example (SQL Server):
SELECT * FROM
(SELECT subject, score FROM Marks) AS SourceTable
PIVOT (
  MAX(score) FOR subject IN ([Math], [Science])
) AS PivotTable;

Key Point: Makes reporting easier. Avoid overuse in core logic.

44. ROLLUP and CUBE
Definition: Extensions to GROUP BY for creating subtotals and grand totals.

ROLLUP
SELECT department, role, COUNT(*)
FROM Employees
GROUP BY ROLLUP (department, role);

CUBE
SELECT department, role, COUNT(*)
FROM Employees
GROUP BY CUBE (department, role);

Key Point: Saves time when generating hierarchical reports.

45. WITH TIES
Definition: Returns additional rows that match the last value in an ORDER BY ... TOP query.

Example:
SELECT TOP 3 WITH TIES name, score
FROM Marks
ORDER BY score DESC;

Key Point: Useful when ranks are tied and you want to return all top scorers.

46. SEQUENCES
Definition: Auto-incrementing number generator independent of tables.

Example:
CREATE SEQUENCE student_seq START WITH 1 INCREMENT BY 1;
SELECT NEXT VALUE FOR student_seq;

Key Point: More flexible than IDENTITY in some databases.

47. IDENTITY vs SEQUENCE
IDENTITY: Tied to a table. Auto-increments per insert.
SEQUENCE: Independent object. Can be reused across tables or manually advanced.
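
A sketch contrasting the two (SQL Server syntax; table names are illustrative):
-- IDENTITY: defined on a column, increments automatically per insert
CREATE TABLE Logs (
  id INT IDENTITY(1,1) PRIMARY KEY,
  message VARCHAR(200)
);

-- SEQUENCE: a standalone object that any table or statement can draw from
CREATE SEQUENCE ticket_seq START WITH 1000 INCREMENT BY 1;
CREATE TABLE Tickets (
  id INT DEFAULT (NEXT VALUE FOR ticket_seq),
  subject VARCHAR(100)
);
INSERT INTO Tickets (subject) VALUES ('Printer issue');  -- id comes from the sequence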

48. LOCKING & ISOLATION LEVELS
Definition: Controls how transactions read/write shared data.

Levels

  • READ UNCOMMITTED: Dirty reads allowed
  • READ COMMITTED: Only committed data
  • REPEATABLE READ: Prevents non-repeatable reads
  • SERIALIZABLE: Strictest, full isolation

Key Point: Choose based on consistency vs performance trade-off.
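
Setting the level in SQL Server, as a sketch (table name is illustrative):
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
BEGIN TRANSACTION;
SELECT balance FROM Accounts WHERE id = 1;  -- repeated reads return the same value in this transaction
COMMIT;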

49. DEADLOCK

  • Definition: Two or more sessions wait forever for resources locked by each other.
  • Fix: Reduce transaction size, always lock objects in the same order, use proper isolation levels.

50. EXECUTION PLAN

  • Definition: Visual or textual explanation of how SQL Server/MySQL/PostgreSQL will execute the query.
  • Key Point: Use EXPLAIN (MySQL/PostgreSQL), SET SHOWPLAN_ALL ON (SQL Server), or right-click → “Display Estimated Execution Plan” in SQL Server Management Studio.

51. ANALYZE / UPDATE STATISTICS
Definition: Refreshes metadata so the query planner makes better decisions.

Example (PostgreSQL):
ANALYZE Students;

SQL Server:
UPDATE STATISTICS Students;

52. PARTITIONING

  • Definition: Divides large tables into smaller, more manageable parts (partitions).
  • Key Point: Boosts performance on very large tables. Query optimizer uses partition elimination.

53. Sharding

  • Definition: Horizontal partitioning across databases or servers.
  • Key Point: Needed for very high scale. Not supported by default in many RDBMS — requires custom implementation or external tools.

54. Temporal Tables (System-Versioned Tables)
Definition: Track historical changes automatically.

Example (SQL Server 2016+):
CREATE TABLE StudentsHistory (
  id INT PRIMARY KEY,  -- system-versioned tables require a primary key
  name VARCHAR(50),
  VALID_FROM DATETIME2 GENERATED ALWAYS AS ROW START,
  VALID_TO DATETIME2 GENERATED ALWAYS AS ROW END,
  PERIOD FOR SYSTEM_TIME (VALID_FROM, VALID_TO)
) WITH (SYSTEM_VERSIONING = ON);

HostForLIFEASP.NET SQL Server 2022 Hosting

 



European SQL Server 2022 Hosting :: Track Performance with the Query Store

clock August 5, 2025 08:54 by author Peter

The Query Store: What is it?
With SQL Server 2016, a performance monitoring tool called the Query Store was added. It records query execution history, query plans, and performance metrics over time, much like a black box recorder. Because Query Store retains historical data, it is simpler to troubleshoot performance issues that have already occurred, in contrast to standard DMVs that only provide live statistics.

Why Use Query Store?

  • Monitor how query performance changes over time.
  • Detect and troubleshoot query regressions (when performance drops).
  • Compare performance before and after deployments.
  • Identify and force optimal plans for queries.
  • Track resource usage (CPU, IO, duration) per query.

How to Enable Query Store?
Option 1. T-SQL

ALTER DATABASE [YourDatabaseName]
SET QUERY_STORE = ON;

Option 2. SSMS GUI

  • Right-click the database → Properties
  • Navigate to the Query Store page
  • Set Operation Mode (Requested) to Read Write
  • Adjust other settings like capture mode and retention as needed


What Data does Query Store Capture?

Category               | Description
Query Text             | The full T-SQL text of the executed query
Query Plan             | The execution plan used to execute the query
Runtime Stats          | CPU time, duration, reads/writes, executions, etc.
Wait Stats (SQL 2017+) | Time spent waiting on resources like locks or memory
Plan Forcing           | Force SQL Server to always use a specific plan

Internal Views for Query Store Analysis

  • sys.query_store_query: Stores each normalized query
  • sys.query_store_plan: Stores query execution plans
  • sys.query_store_runtime_stats: Stores execution metrics
  • sys.query_store_wait_stats: Captures wait types per query (SQL 2017+)
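
A sketch that joins these views to surface the heaviest queries by CPU (sys.query_store_query_text is the companion view holding the statement text):
SELECT TOP 10
    q.query_id,
    qt.query_sql_text,
    SUM(rs.avg_cpu_time * rs.count_executions) AS total_cpu_time
FROM sys.query_store_query AS q
JOIN sys.query_store_query_text AS qt ON q.query_text_id = qt.query_text_id
JOIN sys.query_store_plan AS p ON q.query_id = p.query_id
JOIN sys.query_store_runtime_stats AS rs ON p.plan_id = rs.plan_id
GROUP BY q.query_id, qt.query_sql_text
ORDER BY total_cpu_time DESC;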

Built-in SSMS Reports
SSMS provides graphical reports for easier analysis.

  • Right-click the database
  • Select Reports → Standard Reports → Query Store
  • Explore reports such as:
    • Top Resource Consuming Queries
    • Tracked Queries
    • Query Plan Changes
    • Regressed Queries

Plan Forcing Example
If SQL Server chooses a suboptimal plan, you can force a better one.

Steps to Force a Plan via T-SQL

EXEC sp_query_store_force_plan
    @query_id = 102,
    @plan_id = 301;


To Unforce a Plan
EXEC sp_query_store_unforce_plan
    @query_id = 102,
    @plan_id = 301;


Note: Forced plans remain in use until manually unforced or if the plan becomes invalid due to schema changes.

How to Clean Query Store?
Remove Specific Plan or Query

EXEC sp_query_store_remove_plan @plan_id = 123;
EXEC sp_query_store_remove_query @query_id = 456;


Clear All Data from Query Store
ALTER DATABASE [YourDatabaseName]
SET QUERY_STORE CLEAR;

Query Store Modes

Mode       | Description
Off        | Query Store is disabled
Read Only  | Existing data is visible but not updated
Read Write | Captures and stores new execution data

Limitations and Considerations

  • Slight overhead on heavily loaded systems (~1–5%)
  • Disk space usage can grow—monitor data size regularly
  • Not available in SQL Server 2014 or earlier
  • Retention settings control how long historical data is kept

Conclusion
The Query Store is a powerful feature for any SQL Server environment focused on performance, stability, and transparency. It simplifies identifying performance problems and helps maintain consistent performance by allowing query plan control. Use it proactively in development, staging, and production environments to avoid surprises and ensure optimal performance.

HostForLIFEASP.NET SQL Server 2022 Hosting

 



European SQL Server 2022 Hosting :: How to Create a Data Mart from the Ground Up: A Guide for BI Teams Focused on Data Warehouses and SQL

clock August 1, 2025 08:01 by author Peter

In this article, we will design and implement a Data Mart, leveraging data warehouse concepts, SQL methodologies, and BI-focused schema architecture.

Step 1. Understand Business Requirements for the Data Warehouse

  • Engage business stakeholders to define reporting needs.
  • Identify key facts (e.g., total sales, loan balance) and dimensions (e.g., product, branch, customer).
  • Define the grain of the data mart (e.g., "daily branch sales").

Step 2. Design the Data Warehouse Schema (Dimensional Modeling)
Use a Star Schema or Snowflake Schema, optimized for SQL queries.

  • Fact Table (e.g., Fact_Sales): Contains numeric metrics and foreign keys
  • Dimension Tables (e.g., Dim_Customer, Dim_Product): Contain descriptive attributes to slice and filter data
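
A minimal star-schema sketch for these two roles (column choices are illustrative):
CREATE TABLE Dim_Product (
    ProductKey  INT IDENTITY(1,1) PRIMARY KEY,  -- surrogate key
    ProductID   INT,                            -- business key from the source system
    ProductName NVARCHAR(100),
    Category    NVARCHAR(50)
);

CREATE TABLE Fact_Sales (
    DateKey     INT,
    ProductKey  INT REFERENCES Dim_Product(ProductKey),
    BranchKey   INT,
    Quantity    INT,
    SalesAmount DECIMAL(18, 2)
);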

Step 3. Source to Target Mapping (STM)

  • Identify source systems (e.g., OLTP DBs like Core Banking, ERP).
  • Document field-level mappings from source to warehouse tables.
  • Define SQL-based transformations (e.g., date format conversion, category lookup).

Step 4. Build the ETL in SQL or ELT with dbt
Using SQL or dbt (Data Build Tool).

  • Extract: Pull raw data from staging tables.
  • Transform: Use SQL CTEs or dbt models to clean and join.
  • Load: Insert transformed data into fact/dimension tables.

Tips

  • Use SCD Type 2 for dimensions
  • Maintain surrogate keys
  • Use indexes and partitions for performance
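
A sketch of a SQL-based transform-and-load step (staging and target names are illustrative, reusing the Dim_Product sketch above):
WITH cleaned AS (
    SELECT ProductID,
           LTRIM(RTRIM(ProductName))         AS ProductName,
           TRY_CAST(Price AS DECIMAL(18, 2)) AS Price
    FROM stg_ProductPrices
    WHERE ProductID IS NOT NULL
)
INSERT INTO Dim_Product (ProductID, ProductName, Category)
SELECT c.ProductID, c.ProductName, 'Unknown'
FROM cleaned AS c
WHERE NOT EXISTS (SELECT 1 FROM Dim_Product d WHERE d.ProductID = c.ProductID);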

Step 5. Connect to BI Tools

  • Expose the SQL-based data mart to Power BI, Tableau, or Looker.
  • Use pre-aggregated views for performance.
  • Provide semantic consistency by using dimensional hierarchies.

Step 6. Testing and Optimization

  • Write SQL test cases for row counts, null checks, and data accuracy.
  • Schedule SQL scripts or dbt jobs for refresh.
  • Monitor performance with query logs or warehouse query history.

Conclusion
Building a Data Mart using SQL and data warehousing best practices is essential for scalable and high-performing BI. With the right design and automation in tools like dbt or SQL scripts, your organization can gain accurate, fast insights. In the next article, we’ll walk through building a Sales Data Mart in SQL + dbt with dimensional modeling and snapshotting logic.

HostForLIFEASP.NET SQL Server 2022 Hosting



About HostForLIFE

HostForLIFE is European Windows Hosting Provider which focuses on Windows Platform only. We deliver on-demand hosting solutions including Shared hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes.

We have offered the latest Windows 2019 Hosting, ASP.NET 5 Hosting, ASP.NET MVC 6 Hosting and SQL 2019 Hosting.

