European Windows 2019 Hosting BLOG

BLOG about Windows 2019 Hosting and SQL 2019 Hosting - Dedicated to European Windows Hosting Customers

European SQL Server 2022 Hosting :: How to Improve SQL Queries to Run More Quickly in Big Databases?

September 25, 2025 09:06 by author Peter

The Significance of Optimizing SQL Queries
Slow SQL queries can affect your entire system when working with big databases that contain millions of rows. In addition to taking longer to respond, poorly written queries consume more CPU, memory, and I/O. Query optimization ensures that reports, queries, and transactions stay fast in applications such as e-commerce, banking, analytics, and log management systems.

The goal of query optimization is to make your queries smarter, not heavier. By using indexing, effective filtering, caching, and partitioning, you can significantly cut execution time and improve scalability.

Use Indexes to Speed Up Searches
Indexes are like the index of a book – instead of flipping through every page, the database can jump directly to the relevant section.

  • Create indexes on columns frequently used in WHERE, JOIN, and ORDER BY.
  • Use composite indexes when queries filter by multiple columns (see the second example below).
  • Avoid over-indexing, as too many indexes can slow down INSERT and UPDATE operations.

-- Adding index for faster lookups
CREATE INDEX idx_customers_email ON customers(email);
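When queries filter by multiple columns, a composite index can serve both predicates with a single structure. A minimal sketch, assuming an orders table with customer_id and order_date columns:

-- Composite index for queries filtering by customer and date range
CREATE INDEX idx_orders_customer_date ON orders(customer_id, order_date);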

Indexes can reduce query time from several seconds to milliseconds in large databases.

Select Only the Columns You Need

Using SELECT * is one of the most common mistakes in SQL. It retrieves all columns, even when you need only a few. This adds unnecessary I/O and slows down queries.
-- Bad: retrieves everything
SELECT * FROM orders;

-- Good: retrieves only needed data
SELECT order_id, customer_id, total_amount FROM orders;


This practice is especially important in wide tables with dozens of columns.

Write Efficient Joins

Joins are powerful but can be costly if not written carefully.

  • Ensure the join columns are indexed.
  • Use INNER JOIN instead of LEFT JOIN when you only need matching rows.
  • Avoid redundant joins if the data can be obtained from a single table.

SELECT c.name, o.order_id
FROM customers c
INNER JOIN orders o ON c.customer_id = o.customer_id;


Efficient joins prevent full table scans and make queries scale better in large datasets.

Filter Data Properly with WHERE Clauses

Well-structured WHERE clauses help the database use indexes effectively.

  • Avoid applying functions directly to indexed columns.
  • Rewrite queries using date ranges or numeric ranges instead of transformations.


-- Bad: prevents index usage
SELECT * FROM sales WHERE YEAR(sale_date) = 2025;

-- Good: uses index efficiently
SELECT * FROM sales
WHERE sale_date >= '2025-01-01' AND sale_date < '2026-01-01';


This approach drastically reduces execution time when working with time-based queries.

Limit the Number of Rows You Retrieve

Fetching millions of rows when you only need the latest 100 records wastes time and resources. Always use LIMIT or TOP.
SELECT * FROM logs ORDER BY log_time DESC LIMIT 100;

This is crucial for dashboards, reports, and log systems that only display recent activity.

Check and Understand Execution Plans
Execution plans show how the database engine processes a query. By analyzing them, you can find bottlenecks.

  • Use EXPLAIN in MySQL/PostgreSQL or SET SHOWPLAN_ALL ON in SQL Server (sketched below).
  • Watch for full table scans, which indicate the query is ignoring indexes.
  • Optimize queries so the database performs index seeks instead of scans.

EXPLAIN SELECT * FROM orders WHERE customer_id = 1001;
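In SQL Server, the equivalent is to enable SHOWPLAN before running the statement; a sketch, noting that each SET must run in its own batch:

-- SQL Server: return the estimated plan instead of executing the query
SET SHOWPLAN_ALL ON;
GO
SELECT * FROM orders WHERE customer_id = 1001;
GO
SET SHOWPLAN_ALL OFF;
GO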

Execution plans are your best tool for diagnosing and fixing slow queries.

Use Partitioning for Very Large Tables
Partitioning splits a huge table into smaller, more manageable parts. This way, queries only scan the relevant partition instead of the whole dataset.
Example: Partition a sales table by year. Queries that fetch 2025 sales only look at the 2025 partition, reducing execution time dramatically.
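In SQL Server, partitioning is set up with a partition function and a partition scheme. A minimal sketch, assuming a sales table keyed by sale_date (all names are illustrative):

-- Partition function: one range per year
CREATE PARTITION FUNCTION pf_sales_year (date)
AS RANGE RIGHT FOR VALUES ('2024-01-01', '2025-01-01', '2026-01-01');

-- Map all partitions to the PRIMARY filegroup for simplicity
CREATE PARTITION SCHEME ps_sales_year
AS PARTITION pf_sales_year ALL TO ([PRIMARY]);

-- Create the table on the partition scheme
CREATE TABLE sales (
    id        INT           NOT NULL,
    sale_date DATE          NOT NULL,
    amount    DECIMAL(10,2) NOT NULL
) ON ps_sales_year (sale_date);

A query such as the 2025 date-range filter shown earlier then touches only the 2025 partition.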

Cache Expensive Queries
If your query is frequently executed and rarely changes, caching can save time.

  • Use application-level caching with Redis or Memcached.
  • Use materialized views (in PostgreSQL, Oracle, etc.) for pre-computed results, as sketched below.

Caching reduces repeated execution of heavy queries and improves response times.
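As an example of the second option, a PostgreSQL materialized view can pre-compute a heavy aggregate (the table and view names are illustrative):

-- Pre-compute a daily sales summary once instead of on every request
CREATE MATERIALIZED VIEW daily_sales_summary AS
SELECT sale_date, SUM(amount) AS total_amount, COUNT(*) AS order_count
FROM sales
GROUP BY sale_date;

-- Refresh on a schedule (e.g., nightly) to pick up new data
REFRESH MATERIALIZED VIEW daily_sales_summary;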

Rewrite Subqueries for Better Performance

Subqueries inside IN clauses or correlated subqueries often slow queries down. Rewriting them with EXISTS or JOIN usually helps.
-- Slow version
SELECT name FROM customers
WHERE customer_id IN (SELECT customer_id FROM orders);

-- Optimized version
SELECT name FROM customers c
WHERE EXISTS (SELECT 1 FROM orders o WHERE o.customer_id = c.customer_id);


This improves performance in large databases with millions of rows.

Insert and Update in Batches

Single-row inserts and updates can be slow. Batch operations reduce overhead by grouping multiple records into a single query.
-- Slow: multiple inserts
INSERT INTO sales VALUES (1, '2025-09-01', 100);
INSERT INTO sales VALUES (2, '2025-09-02', 200);

-- Fast: batch insert
INSERT INTO sales (id, sale_date, amount)
VALUES
(1, '2025-09-01', 100),
(2, '2025-09-02', 200);


This is essential for ETL pipelines and bulk data processing.

Real-World Case Study: Before vs After Optimization
Scenario

An e-commerce platform runs a query on a table with 50 million orders. The original query took 12 seconds to complete.

Original Query (Slow)

SELECT *
FROM orders
WHERE YEAR(order_date) = 2025
  AND customer_email = '[email protected]'
ORDER BY order_date DESC;

  • Used SELECT * → fetched all columns unnecessarily.
  • Used YEAR(order_date) → prevented index usage.
  • No index on customer_email → forced full table scan.

Execution Plan: Scanned all 50M rows.

Performance: ~12 seconds.

Optimized Query (Fast)

-- Create composite index
CREATE INDEX idx_orders_email_date
ON orders (customer_email, order_date);

-- Optimized query
SELECT order_id, order_date, total_amount
FROM orders
WHERE customer_email = '[email protected]'
  AND order_date >= '2025-01-01'
  AND order_date < '2026-01-01'
ORDER BY order_date DESC
LIMIT 100;


  • Selected only the necessary columns.
  • Replaced YEAR() with a date range to keep the index usable.
  • Created a composite index on (customer_email, order_date).
  • Added LIMIT for pagination.

Before vs After Comparison

Factor           Before  After
Query Time       ~12s    ~0.15s
Rows Scanned     50M     ~2,500
CPU Usage        High    Low
User Experience  Slow    Instant

Visual Flow of SQL Optimization

Slow Query Detected → Check Execution Plan → Add or Adjust Indexes → Rewrite SELECT, JOIN, WHERE Clauses → Apply LIMIT or Pagination → Use Partitioning or Caching → Measure Performance Again

Summary
Optimizing SQL queries is essential when working with large databases. Small improvements such as using indexes, avoiding SELECT *, rewriting WHERE clauses, limiting results, and analyzing execution plans can cut query times from seconds to milliseconds. For very large datasets, partitioning, caching, and batch operations make queries even faster. The case study showed how a 12-second query was reduced to 0.15 seconds with proper optimization, proving that smart SQL tuning leads to better performance, efficient resource use, and improved user experience.

HostForLIFEASP.NET SQL Server 2022 Hosting

SQL Server Hosting - HostForLIFE :: Check Constraint in SQL Server

September 18, 2025 10:05 by author Peter

In this article I describe Check Constraints: what they are, how to add one to a table, how to remove one, and their limitations. Before explaining Check Constraints, it is necessary to first explain what Domain Integrity is.

Domain Integrity
Domain Integrity ensures that values are valid for columns and prevents invalid values for columns within a database.

Check Constraint

A Check Constraint is a rule that identifies valid values for columns of data. A Check Constraint helps to enforce Domain Integrity. If the condition in a Check Constraint is not satisfied, the value is prevented from being entered into the database.

Syntax
CREATE TABLE tableName (
    Column1 dataType CHECK (expression),
    Column2 dataType,
    ColumnN dataType
);

Example
CREATE TABLE emp (
    empId INT CHECK (empId > 10),
    empName VARCHAR(15)
);

If we try to insert a record with an empId of 10 or less, the Check Constraint rejects it with an error:
INSERT INTO emp (empId, empName)
VALUES (8, 'd');


The INSERT fails with a CHECK constraint conflict error, because empId = 8 does not satisfy the condition empId > 10.

Dropping the Check Constraint
First of all, we can determine the name of the constraint using the following command.
EXEC sp_help emp;

The output lists the constraint, including its system-generated name, here CK__emp__empId__1A14E395.

Now execute the following command.
ALTER TABLE emp
DROP CONSTRAINT CK__emp__empId__1A14E395;



Adding the Check Constraint
ALTER TABLE emp
ADD CHECK (empId > 15);


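Tip: if you name the constraint yourself when adding it, you can drop it later without looking up the system-generated name. A minimal sketch (the constraint name CK_emp_empId is illustrative):

ALTER TABLE emp
ADD CONSTRAINT CK_emp_empId CHECK (empId > 15);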

Limitation
A Check Constraint rejects values that are invalid, that is, values that do not satisfy the check condition. In the case of NULL, however, the condition evaluates to UNKNOWN rather than FALSE, so the Check Constraint allows the NULL to be inserted into the database.

Insertion of Null value
INSERT INTO emp
VALUES (NULL, 'd');

The row is inserted successfully even though empId is NULL.

Summary
In this article, I described a Check Constraint in SQL Server. I hope this article has helped you to understand this topic. Please share if you know more about this. Your feedback and constructive contributions are welcome.

HostForLIFEASP.NET SQL Server 2022 Hosting

European SQL Server 2022 Hosting :: Developing, Protecting, and Improving SQL Server: A Practical Example of E-Commerce

September 9, 2025 06:54 by author Peter

1. Schema Design
Key Ideas

  • Normalize to reduce redundancy.
  • Use constraints and foreign keys.
  • Track timestamps for auditing.

Schema Diagram

Customers ───< Orders ───< OrderItems >─── Products

Example Table: Customers

Column      Type           Notes
CustomerID  INT (PK)       Identity
FirstName   NVARCHAR(50)   Not Null
LastName    NVARCHAR(50)   Not Null
Email       NVARCHAR(100)  Unique, Not Null
CreatedAt   DATETIME2      Default SYSUTCDATETIME()
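In T-SQL, the same table might be defined like this (a minimal sketch):

-- Customers table matching the layout above
CREATE TABLE Customers (
    CustomerID INT IDENTITY(1,1) PRIMARY KEY,
    FirstName  NVARCHAR(50)  NOT NULL,
    LastName   NVARCHAR(50)  NOT NULL,
    Email      NVARCHAR(100) NOT NULL UNIQUE,
    CreatedAt  DATETIME2     NOT NULL DEFAULT SYSUTCDATETIME()
);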

Checklist

  • Primary & foreign keys defined
  • Appropriate data types
  • Default timestamps added

2. Indexing & Query Tuning
Scenario: Calculate total spending per customer.

Query

SELECT c.FirstName, c.LastName, SUM(oi.Quantity * oi.Price) AS TotalSpent
FROM Customers c
JOIN Orders o ON c.CustomerID = o.CustomerID
JOIN OrderItems oi ON o.OrderID = oi.OrderID
WHERE o.OrderDate >= '2025-01-01'
GROUP BY c.FirstName, c.LastName
ORDER BY TotalSpent DESC;

Optimization: Add covering indexes to speed up joins.

Table       Index                                  Purpose
Orders      (CustomerID, OrderDate)                Filter & join optimization
OrderItems  (OrderID, ProductID, Quantity, Price)  Aggregate faster
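In T-SQL these might be created as follows (a sketch; the index names are illustrative):

-- Support the join on CustomerID and the OrderDate filter
CREATE INDEX IX_Orders_CustomerID_OrderDate
ON Orders (CustomerID, OrderDate);

-- Let the aggregate be answered from the index alone
CREATE INDEX IX_OrderItems_OrderID_Covering
ON OrderItems (OrderID, ProductID, Quantity, Price);

In SQL Server, the non-key columns of the OrderItems index could instead go in an INCLUDE clause to keep the index key smaller.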

3. ETL: Efficient Data Loading

Scenario: Update daily product prices from CSV.

Process Diagram
CSV File → Staging Table → Merge → Products Table

Example SQL
BULK INSERT ProductStaging FROM 'C:\Data\ProductPrices.csv'
WITH (FIRSTROW = 2, FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');

MERGE Products AS target
USING ProductStaging AS source
ON target.ProductID = source.ProductID
WHEN MATCHED THEN UPDATE SET Price = source.Price
WHEN NOT MATCHED BY TARGET THEN INSERT (Name, Price, Stock) VALUES (source.Name, source.Price, source.Stock);


Checklist

  • Validate incoming data
  • Only update changed records
  • Log ETL results

4. Backup & Recovery
Backup Strategy Diagram

Full Backup → Differential Backup → Transaction Log Backups

Example SQL
BACKUP DATABASE ECommerceDB TO DISK = 'C:\Backups\ECommerceDB_FULL.bak' WITH INIT, COMPRESSION;
BACKUP LOG ECommerceDB TO DISK = 'C:\Backups\ECommerceDB_Log.trn';
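The differential step from the diagram would sit between the full and log backups; a sketch:

-- Differential backup: only pages changed since the last full backup
BACKUP DATABASE ECommerceDB TO DISK = 'C:\Backups\ECommerceDB_DIFF.bak' WITH DIFFERENTIAL, COMPRESSION;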

Checklist

  • Full backups scheduled
  • Transaction logs backed up for minimal data loss
  • Test restores regularly

5. Security: Controlling Access

Role-Based Access Table

Role     Access Type  Notes
AppUser  Read/Write   Application use only
Analyst  Read-Only    Reporting and analytics only
Admin    Full         DBAs only

SQL Example
CREATE ROLE ReadOnlyRole;
ALTER ROLE ReadOnlyRole ADD MEMBER [ReportingUser];
DENY INSERT, UPDATE, DELETE TO ReadOnlyRole;

6. Monitoring & Maintenance
Maintenance Flow Diagram
Index Rebuild → Statistics Update → Query Performance Monitoring → Alerts

Example SQL for Long-Running Queries
SELECT TOP 10
    qs.total_elapsed_time/1000 AS TotalMs,
    qs.execution_count,
    SUBSTRING(qt.text, (qs.statement_start_offset/2) + 1,
        ((CASE WHEN qs.statement_end_offset = -1 THEN LEN(CONVERT(NVARCHAR(MAX), qt.text)) * 2 ELSE qs.statement_end_offset END - qs.statement_start_offset)/2) + 1) AS QueryText
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) qt
ORDER BY TotalMs DESC;


Checklist

  • Monitor slow queries & deadlocks
  • Rebuild fragmented indexes
  • Update statistics regularly
  • Set alerts for critical issues

HostForLIFEASP.NET SQL Server 2022 Hosting



European SQL Server 2022 Hosting :: SQL Server Statistics Update

September 3, 2025 08:26 by author Peter

The query optimizer, one of SQL Server's core components, uses statistics to determine the most efficient execution plan for a query. Statistics are used to estimate how many rows a query will return and to work out the cheapest access path to the data. In this post, we will go over how to update statistics in SQL Server so that the query optimizer always has up-to-date information for accurate estimates.

What are Statistics in SQL Server?
Statistics in SQL Server are statistical summaries of the distribution of data in one or more columns of a table or index. They allow the query optimizer to make informed decisions about how to retrieve data efficiently, helping SQL Server choose the most optimal query execution plans, which ultimately impacts query performance.

By default, SQL Server automatically updates statistics when a certain threshold of data changes has been reached. This threshold is known as the “auto-update” threshold. However, there are situations where you may want to update statistics manually to ensure that the query optimizer has the most accurate information to work with.

Why are Statistics Important?
Accurate statistics are critical for SQL Server to generate the most efficient execution plan for a query. If the statistics are outdated or inaccurate, the query optimizer may choose a suboptimal execution plan, resulting in slower query performance. Additionally, inaccurate statistics can lead to suboptimal index usage, which can cause excessive disk I/O and memory usage.

When to Update Statistics in SQL Server?
While SQL Server automatically updates statistics for you, there are certain scenarios where you may want to update statistics manually. These scenarios include:

  • Large Data Changes: If a large percentage of data has been added, modified, or deleted from a table, you may want to update statistics manually to ensure that the query optimizer has the most accurate information.
  • Stale Statistics: If the statistics for a table or indexed view are stale (out of date), you may want to update them manually to ensure that the query optimizer has the most accurate information. This can happen when data is added, modified, or deleted from a table or indexed view after the statistics were last updated.
  • Performance Issues: If you are experiencing performance issues with a particular query, updating statistics may help to improve performance by providing the query optimizer with more accurate information.

How to Update Statistics in SQL Server?
Automatic Updates - SQL Server automatically updates statistics when a threshold of data modifications occurs, or when the query optimizer determines that the statistics are out of date. This is the default behavior for SQL Server, and it works well for most databases.
Manual Updates - We can manually update statistics using the UPDATE STATISTICS statement.

Update the statistics for a single table or indexed view using the following command:
UPDATE STATISTICS [table_name]

Where [table_name] is the name of the table or indexed view for which statistics are to be updated.
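Sampling can also be controlled explicitly; for example, FULLSCAN reads the entire table when rebuilding the statistics (the table name is illustrative):

UPDATE STATISTICS Orders WITH FULLSCAN;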

Update the statistics for all tables in a database using the following command:
EXEC sp_updatestats

When statistics are updated, SQL Server updates the statistics for all columns whose density information is out of date. This means the update may not necessarily refresh every statistic on the table or indexed view.

It's important to note that updating statistics can be a resource-intensive operation, particularly for large tables. Therefore, it's important to carefully consider the frequency and timing of statistics updates to ensure they don't negatively impact system performance.
How to View Statistics Properties?

By using the below T-SQL command:
DBCC SHOW_STATISTICS

For more information, see DBCC SHOW_STATISTICS
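For example (the table and statistic names are hypothetical):

-- Shows the header, density vector, and histogram for one statistic
DBCC SHOW_STATISTICS ('dbo.Orders', 'IX_Orders_CustomerID');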

We can also use the ‘sys.stats’ system catalog view, which provides information about statistics for all user-defined and internal tables and indexes in the current database, with the following command:
SELECT
    OBJECT_NAME(s.object_id) AS TableName,
    i.name AS IndexName,
    s.name AS StatisticName,
    s.auto_created AS IsAutoCreated,
    s.user_created AS IsUserCreated,
    STATS_DATE(s.object_id, s.stats_id) AS LastUpdated
FROM sys.stats AS s
INNER JOIN sys.indexes AS i
ON s.object_id = i.object_id
  AND s.stats_id = i.index_id
WHERE OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1
ORDER BY TableName, IndexName

This statement retrieves information about statistics for all user-defined tables and indexes in the database, including the table and index names, statistic name, whether the statistic was automatically or manually created, and the date and time the statistics were last updated.

Examine the “LastUpdated” column to determine whether the statistics are up-to-date. If the value in this column is relatively recent, the statistics are up-to-date. However, if the value is old or NULL, the statistics may be out-of-date and need to be updated.

Conclusion
Statistics are an important component of SQL Server's query optimization process. By providing the query optimizer with accurate information about the distribution of data in a table or indexed view, statistics can help to improve query performance. While SQL Server automatically updates statistics for you, there are certain scenarios where you may want to update them manually to ensure that the query optimizer has the most accurate information. By understanding when and how to update statistics in SQL Server, you can help to ensure that your queries are running as efficiently as possible.

HostForLIFEASP.NET SQL Server 2022 Hosting



