European Windows 2019 Hosting BLOG

BLOG about Windows 2019 Hosting and SQL 2019 Hosting - Dedicated to European Windows Hosting Customers

European SQL Server 2022 Hosting :: Truncate All Table Data in MSSQL

clock January 3, 2024 06:36 by author Peter

Dynamic SQL can be used to truncate every table in a database. When performing this kind of operation, use caution because there is no way to undo the deletion of all data across all tables.

This is an illustration of how to write a query in a SQL Server database to truncate every table.

DECLARE @SchemaName NVARCHAR(128)
DECLARE @TableName NVARCHAR(128)
DECLARE @TruncateQuery NVARCHAR(MAX)

DECLARE tableCursor CURSOR FOR
SELECT TABLE_SCHEMA, TABLE_NAME
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_TYPE = 'BASE TABLE'

OPEN tableCursor

FETCH NEXT FROM tableCursor INTO @SchemaName, @TableName

WHILE @@FETCH_STATUS = 0
BEGIN
    -- Quote the schema and table names so unusual identifiers cannot break the statement
    SET @TruncateQuery = 'TRUNCATE TABLE ' + QUOTENAME(@SchemaName) + '.' + QUOTENAME(@TableName)
    EXEC sp_executesql @TruncateQuery

    FETCH NEXT FROM tableCursor INTO @SchemaName, @TableName
END

CLOSE tableCursor
DEALLOCATE tableCursor

Please take note of the following:

  • This operation is destructive: once the tables are truncated, all data is lost and cannot be recovered.
  • Back up your data: before running any query that alters data in this manner, it's imperative to have a backup in case the original is needed.
  • Review and test first: before running such queries in a production environment, always evaluate and test them properly in a safe setting.

Also be aware that TRUNCATE TABLE fails on any table referenced by a foreign key constraint, so the script will raise errors on such tables. Before carrying out such operations, it's also imperative to consider the permissions required and the effect of truncating tables in a live system.
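If you'd rather review exactly what will run before executing anything, a cautious variant is to generate the TRUNCATE statements first and execute them only after inspection. A minimal sketch (note that PRINT truncates very long strings, so for large schemas you may want to SELECT the variable instead):

DECLARE @TruncateQuery NVARCHAR(MAX) = N''

SELECT @TruncateQuery = @TruncateQuery
    + 'TRUNCATE TABLE ' + QUOTENAME(TABLE_SCHEMA) + '.' + QUOTENAME(TABLE_NAME)
    + ';' + CHAR(13)
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_TYPE = 'BASE TABLE'

PRINT @TruncateQuery   -- review the output, then run it manually or via sp_executesql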

HostForLIFEASP.NET SQL Server 2022 Hosting

 



European SQL Server 2022 Hosting :: Finding and Dropping User-Defined Stored Procedures

clock December 21, 2023 09:23 by author Peter

This little piece of code drops every user-defined stored procedure one by one as it iterates through them (system procedures excluded). Note: you should not run the attached code snippet in production environments, as it is a destructive script. Every user-defined stored procedure in the database is iterated through and dropped one at a time. Serious repercussions may result from this, particularly if your database contains mission-critical procedures.

declare @procName nvarchar(600)
declare cur cursor

for
select quotename(s.[name]) + '.' + quotename(p.[name])
from sys.objects p
join sys.schemas s on s.schema_id = p.schema_id
 where p.type = 'P' and p.is_ms_shipped = 0   -- user-defined procedures only

open cur
fetch next from cur into @procName
while @@fetch_status = 0
begin
    -- QUOTENAME (applied in the cursor query) protects against unusual identifiers
    exec('drop procedure ' + @procName)
    fetch next from cur into @procName
end
close cur
deallocate cur

Understanding the Code
The provided code demonstrates dropping user-defined stored procedures in a database. Here's a breakdown:

Variable declaration:

  • @procName: String variable to store the schema-qualified name of each procedure.
  • cur: Cursor object to iterate through existing procedures.

Cursor definition:

  • Uses the sys.objects catalog view, joined to sys.schemas for the schema name, to find user-defined procedures (type = 'P') while excluding system objects (is_ms_shipped = 0).

Looping through procedures:

  • fetch next retrieves the next procedure's name into @procName.
  • while @@fetch_status = 0 repeats the loop until no more procedures are found.

Dropping procedures:

  • Constructs a dynamic SQL statement (exec) to drop the current procedure (@procName).
  • Executes the dynamic statement.

Cleanup:

  • Closes and deallocates the cursor to free resources.
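
On SQL Server 2017 and later, a cursor isn't strictly necessary. A set-based sketch using STRING_AGG can build a single batch of DROP statements; the CAST to NVARCHAR(MAX) avoids the 8,000-byte aggregation limit, and printing first lets you inspect before anything is dropped:

DECLARE @sql NVARCHAR(MAX)

SELECT @sql = STRING_AGG(
           CAST('DROP PROCEDURE ' + QUOTENAME(s.name) + '.' + QUOTENAME(p.name) AS NVARCHAR(MAX)),
           ';' + CHAR(13))
FROM sys.objects p
JOIN sys.schemas s ON s.schema_id = p.schema_id
WHERE p.type = 'P' AND p.is_ms_shipped = 0

PRINT @sql                      -- inspect first
-- EXEC sp_executesql @sql      -- uncomment to actually drop the procedures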

HostForLIFEASP.NET SQL Server 2022 Hosting



European SQL Server 2022 Hosting :: What Are the SQL Window Functions ROW_NUMBER(), RANK(), and DENSE_RANK()?

clock December 12, 2023 07:15 by author Peter

Within the ever-changing SQL Server environment, window functions are particularly potent instruments for analytical processing. They provide a unique method for carrying out calculations over a range of rows that are related to the current row. In this blog, we'll explore window functions through a number of scenarios, a basic table example, and a normalization use case.

Understanding How Window Functions Work
Window functions operate over a defined window of rows related to the current row, without reducing the result set. They bring new levels of analytical power to SQL queries. Let's begin by creating a straightforward table and adding data to it.

-- Create a simple table
CREATE TABLE Sales (
    SaleID INT PRIMARY KEY,
    ProductID INT,
    SaleDate DATE,
    Amount DECIMAL(10, 2)
);

-- Insert sample data
INSERT INTO Sales VALUES (1, 101, '2023-01-01', 1500.00);
INSERT INTO Sales VALUES (2, 102, '2023-01-02', 2000.00);
INSERT INTO Sales VALUES (3, 101, '2023-01-03', 1200.00);
INSERT INTO Sales VALUES (4, 102, '2023-01-04', 1800.00);

ROW_NUMBER() - Enumerating Rows

The ROW_NUMBER() function assigns a unique number to each row based on a specified order. This can be useful for pagination or ranking purposes.

-- Enumerate rows based on SaleDate
SELECT SaleID, SaleDate, Amount,
       ROW_NUMBER() OVER (ORDER BY SaleDate) AS RowNum
FROM Sales;

RANK() - Ranking Rows
RANK() assigns a unique rank to each distinct row, leaving gaps for tied values.
-- Rank rows based on Amount
SELECT SaleID, SaleDate, Amount,
       RANK() OVER (ORDER BY Amount DESC) AS SalesRank
FROM Sales;

DENSE_RANK() - Dense Ranking Rows
DENSE_RANK() is similar to RANK(), but without gaps for tied values.

-- Dense rank rows based on Amount
SELECT SaleID, SaleDate, Amount,
       DENSE_RANK() OVER (ORDER BY Amount DESC) AS DenseSalesRank
FROM Sales;

In this specific dataset there are no tied values in the Amount column, so the rankings produced by RANK() and DENSE_RANK() are identical.
Here's a brief explanation of the differences (a quick demonstration with a tied row follows the lists below):
RANK()

  • Assigns a unique rank to each distinct row.
  • Leaves gaps in the ranking for tied values. If two rows have the same value, they both get the same rank, and the next rank is skipped.


DENSE_RANK()

  • Similar to RANK().
  • Does not leave gaps for tied values. If two rows have the same value, they both get the same rank, and the next rank is not skipped.
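
To actually see the gap behavior, you can insert a hypothetical fifth sale whose Amount ties with SaleID 2 and run both functions side by side:

-- Add a tied value (2000.00 matches SaleID 2)
INSERT INTO Sales VALUES (5, 103, '2023-01-05', 2000.00);

SELECT SaleID, Amount,
       RANK()       OVER (ORDER BY Amount DESC) AS SalesRank,       -- 1, 1, 3, 4, 5
       DENSE_RANK() OVER (ORDER BY Amount DESC) AS DenseSalesRank   -- 1, 1, 2, 3, 4
FROM Sales;

-- DELETE FROM Sales WHERE SaleID = 5;  -- optional cleanup to restore the original four rows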

SUM() - Cumulative Sum
SUM() as a window function enables the calculation of cumulative sums.

-- Calculate cumulative sum of Amount
SELECT SaleID, SaleDate, Amount,
       SUM(Amount) OVER (ORDER BY SaleDate) AS CumulativeSum
FROM Sales;

AVG() - Moving Average
AVG() as a window function calculates a moving average over a specified window of rows.
-- Calculate 3-day moving average of Amount
SELECT SaleID, SaleDate, Amount,
       AVG(Amount) OVER (ORDER BY SaleDate ROWS BETWEEN 1 PRECEDING AND 1 FOLLOWING) AS MovingAverage
FROM Sales;

LEAD() and LAG() - Accessing Adjacent Rows
LEAD() and LAG() provide access to subsequent and preceding rows, respectively.
-- Access next and previous SaleDate
SELECT SaleID, SaleDate, Amount,
       LEAD(SaleDate) OVER (ORDER BY SaleDate) AS NextSaleDate,
       LAG(SaleDate) OVER (ORDER BY SaleDate) AS PreviousSaleDate
FROM Sales;


Normalizing Data Using Window Functions
Normalization is a key database design principle. Let's use Window Functions to normalize data by creating a new table to store aggregated information.
-- Create a normalized table
CREATE TABLE ProductSales (
    ProductID INT PRIMARY KEY,
    TotalSales DECIMAL(10, 2),
    AverageSaleAmount DECIMAL(10, 2)
);

-- Insert aggregated data using Window Functions
-- (a window function cannot reference an ungrouped column under GROUP BY,
--  so DISTINCT is used instead to collapse the result to one row per product)
INSERT INTO ProductSales
SELECT DISTINCT ProductID,
       SUM(Amount) OVER (PARTITION BY ProductID) AS TotalSales,
       AVG(Amount) OVER (PARTITION BY ProductID) AS AverageSaleAmount
FROM Sales;


The ProductSales table captures the total sales and average sale amount for each product, so queries that only need those aggregates no longer have to recompute them from the detail rows.

SQL Server window functions are a developer's and data analyst's gold mine. These functions improve your analytical capabilities, from ordering rows to computing cumulative sums and moving averages. Gaining proficiency with window functions will enable you to extract meaningful insights from your data and optimize the performance of your SQL queries. They can also play a part in preparing aggregated data, as shown in the examples above, which helps create an effective and well-designed database structure. Happy querying!

HostForLIFEASP.NET SQL Server 2022 Hosting



European SQL Server 2022 Hosting :: Effective SQL Server Data Fetching Methods

clock December 5, 2023 11:03 by author Peter

Overview
Optimizing data retrieval is critical to database management in order to guarantee effective application performance. Microsoft's SQL Server is a powerful relational database management system that provides a number of methods for efficiently retrieving data while taking into account variables like query complexity, data volume, and system resources. This article explores the practical applications of several effective SQL Server data fetching techniques.
Comprehending Query Optimization in SQL Server

The basics of SQL Server query optimization must be understood before delving into specific strategies. A query optimizer in SQL Server is in charge of figuring out the best way to execute a particular query. This decision-making process is influenced by variables such as statistics, query structure, indexes, and system resources that are available.

1. Indexing Techniques

In order to maximize data retrieval, indexing is essential. By establishing a structured path to locate information within tables, indexes speed up data access. SQL Server offers several index types, including clustered, non-clustered, and filtered indexes. Understanding query patterns and data characteristics is essential to choosing the best indexing strategy.

Example SQL Code
CREATE NONCLUSTERED INDEX IX_Employee_DepartmentID
ON Employee (DepartmentID);

SELECT EmployeeID, Name, DepartmentID
FROM Employee
WHERE DepartmentID = 5;

2. Analysis of Query Plans and Statistics
The query optimizer in SQL Server uses statistics to estimate how many rows a query will affect. Keeping statistics up to date helps the optimizer generate accurate execution plans. Analyzing query plans with tools like SQL Server Management Studio (SSMS) makes it easier to spot potential bottlenecks and tune queries for better performance.

Methods for Effectively Fetching Data
1. Index-Based Query Optimization

Making efficient use of indexes is essential to maximizing data retrieval. Query performance is greatly improved by using the right indexes depending on join conditions, sorting requirements, and query predicates. For example, building covering indexes with all the columns needed for a query can reduce I/O operations and improve performance by removing the need to access the actual data pages.
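
As a hedged sketch of the covering-index idea, reusing the Employee example from earlier: the INCLUDE clause copies the extra queried column into the index leaf level, so the earlier SELECT can be answered from the index alone (EmployeeID, as the likely clustering key, is carried in the index automatically):

CREATE NONCLUSTERED INDEX IX_Employee_DepartmentID_Covering
ON Employee (DepartmentID)
INCLUDE (Name);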

2. Optimizing Queries through Query Rewriting and Refactoring
Significant performance gains can frequently be achieved by rewriting and reorganizing queries. Query execution can be optimized by employing strategies such as splitting up complex queries into simpler ones, using derived tables or common table expressions (CTEs), and reducing the usage of functions inside predicates. Using SQL Server's query hints, like 'OPTIMIZE FOR' and 'FORCESEEK,' can also direct the query optimizer to more effective execution strategies.

SQL Code Example
-- Subquery form: filters Orders through an IN list built from OrderDetails
SELECT OrderID, ProductID, Quantity
FROM Orders
WHERE OrderID IN (
    SELECT OrderID
    FROM OrderDetails
    WHERE UnitPrice > 50
);

-- Join form: often easier for the optimizer to execute efficiently
SELECT o.OrderID, od.ProductID, od.Quantity
FROM Orders o
JOIN OrderDetails od ON o.OrderID = od.OrderID
WHERE od.UnitPrice > 50;
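
The OPTIMIZE FOR hint mentioned above can be sketched as follows, using a hypothetical @MinPrice variable; the hint tells the optimizer to compile the plan for a specific value rather than the value it happens to see:

DECLARE @MinPrice MONEY = 50;

SELECT o.OrderID, od.ProductID, od.Quantity
FROM Orders o
JOIN OrderDetails od ON o.OrderID = od.OrderID
WHERE od.UnitPrice > @MinPrice
OPTION (OPTIMIZE FOR (@MinPrice = 50));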

3. Efficient Use of Joins and Subqueries
Carefully crafting join operations and subqueries can significantly impact query performance. Opting for appropriate join types (e.g., INNER, OUTER, CROSS joins) based on the relationships between tables and using EXISTS or IN clauses efficiently can prevent unnecessary data retrieval, thereby enhancing query efficiency.

SQL Code Example
-- EXISTS stops at the first matching row instead of materializing the full subquery result
SELECT Name
FROM Employees e
WHERE EXISTS (
    SELECT 1
    FROM Orders o
    WHERE o.EmployeeID = e.EmployeeID
);

-- An INNER JOIN returns only customers that actually have orders
SELECT c.CustomerName, o.OrderDate
FROM Customers c
INNER JOIN Orders o ON c.CustomerID = o.CustomerID;

4. Pagination and Limiting Results
When dealing with large datasets, implementing pagination techniques becomes crucial to enhance user experience and minimize resource consumption. SQL Server provides functionalities like `OFFSET-FETCH` or using `ROW_NUMBER()` in conjunction with `ORDER BY` clauses to implement pagination efficiently.

SQL Code Example
SELECT ProductID, ProductName, UnitPrice
FROM Products
ORDER BY ProductID
OFFSET 10 ROWS FETCH NEXT 5 ROWS ONLY;
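
The ROW_NUMBER() alternative mentioned above looks roughly like this; it returns the same page of five rows and also works on versions older than SQL Server 2012, where OFFSET-FETCH is unavailable:

WITH NumberedProducts AS (
    SELECT ProductID, ProductName, UnitPrice,
           ROW_NUMBER() OVER (ORDER BY ProductID) AS RowNum
    FROM Products
)
SELECT ProductID, ProductName, UnitPrice
FROM NumberedProducts
WHERE RowNum BETWEEN 11 AND 15;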


5. Caching and Materialized Views
Caching frequently accessed data or utilizing materialized views can reduce the computational overhead associated with repetitive complex queries. SQL Server offers caching mechanisms like Query Store and the use of indexed views, which store precomputed results, thereby accelerating data retrieval for specific queries.

SQL Code Example
-- Indexed views require SCHEMABINDING, two-part table names,
-- and COUNT_BIG(*) whenever GROUP BY is used
CREATE VIEW dbo.MonthlySales
WITH SCHEMABINDING
AS
SELECT YEAR(OrderDate) AS OrderYear,
       MONTH(OrderDate) AS OrderMonth,
       SUM(TotalAmount) AS TotalSales,   -- TotalAmount must be non-nullable for SUM in an indexed view
       COUNT_BIG(*) AS SaleCount
FROM dbo.Orders
GROUP BY YEAR(OrderDate), MONTH(OrderDate);
GO

CREATE UNIQUE CLUSTERED INDEX IX_MonthlySales_OrderYear_OrderMonth
ON dbo.MonthlySales (OrderYear, OrderMonth);


6. Parallel Execution and Resource Management

Leveraging SQL Server's ability to execute queries in parallel can significantly improve performance, especially for CPU-intensive operations. Utilizing features like parallel query execution and configuring resource governor to manage CPU and memory usage optimally can enhance overall system efficiency.

-- SQL Server uses the OPTION clause for degree-of-parallelism hints
-- (the /*+ MAXDOP */ comment style is Oracle syntax, not T-SQL)
SELECT *
FROM LargeTable
WHERE SomeColumn = 1   -- placeholder predicate
OPTION (MAXDOP 4);

7. Monitoring and Performance Tuning
Regular monitoring of database performance using built-in tools like SQL Server Profiler or Extended Events allows for the identification of performance bottlenecks. Performance tuning by analyzing wait statistics, identifying long-running queries, and optimizing them based on execution plans is crucial for maintaining an efficient database environment.

-- Note: time columns in sys.dm_exec_query_stats are reported in microseconds
SELECT TOP 10
    total_elapsed_time / execution_count AS avg_duration,
    execution_count,
    total_logical_reads / execution_count AS avg_logical_reads,
    total_logical_writes / execution_count AS avg_logical_writes,
    sql_text.text AS query_text
FROM sys.dm_exec_query_stats AS query_stats
CROSS APPLY sys.dm_exec_sql_text(query_stats.sql_handle) AS sql_text
ORDER BY avg_duration DESC;

HostForLIFEASP.NET SQL Server 2022 Hosting





European SQL Server 2022 Hosting :: Understanding, Using, and Implementing Time Series Databases

clock November 29, 2023 07:07 by author Peter

The ability to handle and evaluate time-sensitive information successfully is critical in today's data-driven environment. This is especially true in industries like finance, IoT, and healthcare, which generate massive amounts of time-stamped data daily. Traditional relational databases frequently struggle to meet the demands of storing and retrieving such data, which has led to the development of specialized systems known as Time Series Databases (TSDBs).

What exactly are Time Series Databases?
TSDBs are specifically built to excel at keeping and interpreting time-indexed data points, providing optimal performance for time series data. They have a number of advantages over regular databases.

  • Efficient Data Storage: TSDBs use specific data structures and compression techniques to efficiently store time series data, reducing storage requirements.
  • Optimized Querying: TSDBs provide sophisticated query languages designed specifically for time series analysis, allowing users to retrieve and analyze data fast based on time ranges, aggregations, and other criteria.
  • Real-time Analytics: TSDBs are built to handle real-time data input and analysis, allowing businesses to monitor and respond to events as they happen.

Time Series Databases That Are Popular
Several popular TSDBs have evolved, each with its own distinct set of features and advantages. Here are some noteworthy examples.
InfluxDB

An open-source TSDB well known for its fast querying capabilities and its support for massive amounts of time-stamped data.

# Sample InfluxDB Query
from influxdb import InfluxDBClient

client = InfluxDBClient(host='localhost', port=8086)
client.switch_database('mydatabase')

result = client.query("SELECT * FROM mymeasurement WHERE time > now() - 1d")
print(result)


Prometheus
A monitoring and alerting toolkit that excels in collecting and storing metrics using a multidimensional data model and a powerful query language (PromQL).
# Sample PromQL Query
http_requests_total{job="api-server", status="200"}


Elasticsearch
While primarily known as a full-text search and analytics engine, Elasticsearch also offers robust time series data capabilities, handling large-scale time series data with its distributed architecture.

# Sample Elasticsearch Query
{
  "query": {
    "range": {
      "@timestamp": {
        "gte": "now-1d/d",
        "lt": "now/d"
      }
    }
  }
}


OpenTSDB
Built on top of the Hadoop Distributed File System (HDFS), OpenTSDB is designed for scalability, leveraging HBase for storing and retrieving time series data, making it suitable for large-scale deployments.
# Sample OpenTSDB Query
tsdquery summary  -start 1h-ago -end now -m avg:metric_name

Graphite
A lightweight TSDB that focuses on simplicity and ease of use, supporting the Graphite Query Language (GQL); it is well-suited for small to medium-sized deployments.

# Sample Graphite Query
summarize(metric_name, "1h", "sum")


Use Cases of Time Series Databases

TSDBs find applications across a wide range of industries and use cases.

  • Financial Analytics: Analyzing historical market data, tracking transactions, and predicting trends are essential for financial institutions. TSDBs enable real-time monitoring of stock prices, currency exchange rates, and other financial metrics.
  • IoT Data Management: With the proliferation of IoT devices, TSDBs are instrumental in handling the vast amount of data generated by sensors and devices. These databases enable organizations to monitor and analyze data from IoT devices in real-time, leading to informed decision-making.
  • Infrastructure Monitoring: TSDBs find extensive use in monitoring and managing the performance of IT infrastructure. They help organizations track metrics related to server health, network latency, and application response times, facilitating proactive issue detection and resolution.
  • Healthcare Systems: In healthcare, time series databases are employed to store and analyze patient data, monitor vital signs, and track the efficacy of treatments over time. These databases contribute to improved patient care and the advancement of medical research.


Conclusion
Time series databases have become indispensable tools in the modern data landscape, offering specialized solutions for handling the unique challenges posed by time-stamped data. From monitoring and analytics to financial modeling and IoT applications, the use cases for TSDBs continue to expand as the volume of time series data generated across industries continues to grow.

Each database mentioned here brings its own strengths to the table, catering to diverse needs in terms of scalability, performance, and ease of use. As organizations strive to harness the power of their time-series data, TSDBs will play an increasingly crucial role in enabling data-driven decision-making and unlocking new insights.

HostForLIFEASP.NET SQL Server 2022 Hosting

 



European SQL Server 2022 Hosting :: How to Avoid the SQL Server Query Governor Cost Limit Error

clock November 14, 2023 06:59 by author Peter

If you've ever seen an error message like:
"The query has been canceled because the estimated cost of this query (X) exceeds the configured threshold of Y. Contact the system administrator."

you might be wondering what it means when running a SQL Server query and how to remedy it. In this article, I'll look at the source of the problem and how to solve it.

What is the query governor's cost limit?
The query governor cost limit is a SQL Server setting that specifies the maximum estimated cost a query may have and still be allowed to run. The value corresponds to estimated elapsed time, in seconds, on a reference hardware configuration. This setting is intended to prevent runaway queries from consuming too many resources and interfering with the functioning of other queries and applications.

By default, the query governor cost limit is 0, which disables the check entirely. When a system administrator sets it to, for example, 3000, any query whose estimated cost exceeds 3000 (roughly 50 minutes of estimated elapsed time) is canceled before it runs. The value can be changed server-wide with the sp_configure stored procedure, as shown below.
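
For reference, the server-wide value can be inspected and changed with sp_configure; it is an advanced option, so 'show advanced options' must be enabled first, and a value of 0 disables the governor entirely:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

EXEC sp_configure 'query governor cost limit', 3000;   -- 0 turns the governor off
RECONFIGURE;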

Why am I getting the error?
If your query is complex or inefficient, its estimated cost may exceed the threshold. The estimated cost is influenced by the number and size of the tables involved, the indexes and statistics available, the join and aggregation strategies chosen, the filters and sorting applied, and the SQL Server version and edition.

For example, a query that scans a large table with no indexes or filters can carry a very high estimated cost. Similarly, a query that invokes a function or subquery for each row in the result set can be extremely expensive.

How do I fix the error?
The two most likely remedies are to increase the query governor cost limit or to optimize your query. To raise the limit for the current connection, use the SET QUERY_GOVERNOR_COST_LIMIT statement. For example, if the estimated cost of the query is 14,000, you could execute the following line before running it:

SET QUERY_GOVERNOR_COST_LIMIT 15000

Here is an example of how to utilize it:
using Microsoft.Data.SqlClient;
using System.Data;

var connectionString = "Server=YourServerName;Database=YourDatabaseName;User=YourUserName;Password=YourPassword;";

// Create a SqlConnection and SqlCommand
using SqlConnection connection = new SqlConnection(connectionString);

connection.Open();

// Set the QUERY_GOVERNOR_COST_LIMIT before executing the query
var setQueryGovernorCostLimit = "SET QUERY_GOVERNOR_COST_LIMIT 15000;";
using (SqlCommand setQueryGovernorCommand = new SqlCommand(setQueryGovernorCostLimit, connection))
{
setQueryGovernorCommand.ExecuteNonQuery();
}

// Complex SELECT query
var selectQuery = @"
SELECT
    Orders.OrderID,
    Customers.CustomerName,
    Cars.CarModel,
    Colors.ColorName,
    Engines.EngineType,
    Wheels.WheelType,
    Interiors.InteriorType,
    Accessories.AccessoryName,
    Features.FeatureName,
    Options.OptionName
    -- Add more fields related to the customized automobiles here
FROM
    Orders
    INNER JOIN Customers ON Orders.CustomerID = Customers.CustomerID
    INNER JOIN Cars ON Orders.CarID = Cars.CarID
    INNER JOIN Colors ON Orders.ColorID = Colors.ColorID
    INNER JOIN Engines ON Orders.EngineID = Engines.EngineID
    INNER JOIN Wheels ON Orders.WheelID = Wheels.WheelID
    INNER JOIN Interiors ON Orders.InteriorID = Interiors.InteriorID
    LEFT JOIN OrderAccessories ON Orders.OrderID = OrderAccessories.OrderID
    LEFT JOIN Accessories ON OrderAccessories.AccessoryID = Accessories.AccessoryID
    LEFT JOIN CarFeatures ON Cars.CarID = CarFeatures.CarID
    LEFT JOIN Features ON CarFeatures.FeatureID = Features.FeatureID
    LEFT JOIN CarOptions ON Cars.CarID = CarOptions.CarID
    LEFT JOIN Options ON CarOptions.OptionID = Options.OptionID
WHERE
    1 = 1 -- Replace 1 = 1 with your filter conditions for customizations or specific orders
";


using SqlCommand command = new SqlCommand(selectQuery, connection);

using SqlDataAdapter adapter = new SqlDataAdapter(command);

DataTable dataTable = new DataTable();
adapter.Fill(dataTable);


This will prevent your query from being canceled. However, this method is not advised, because it can impact the performance of other queries and applications, and the statement needs to be executed on every connection that runs the query.

A preferable course of action is to optimize your query so that its estimated cost drops below the threshold. The following methods can help (a statistics example follows this list):
  • Use indexes: By reducing the amount of scanning and sorting necessary, indexes help SQL Server locate data more quickly. You can create indexes on the columns used in your query's join, filter, and ORDER BY clauses. The Database Engine Tuning Advisor can also recommend suitable indexes for your query.
  • Use statistics: Statistics let SQL Server estimate the number of rows and the value distribution in tables, which helps it select the best execution plan for the query. You can update the statistics on your tables with the UPDATE STATISTICS statement or the sp_updatestats stored procedure.
  • Simplify your query: Eliminate unnecessary tables, joins, filters, columns, and sorting. You can rewrite the query to use more efficient operators, such as EXISTS rather than IN, or divide it into smaller queries using temporary tables or common table expressions (CTEs).
  • Use the latest SQL Server version and edition: Newer releases may offer better performance and optimization features than previous ones. For instance, the estimated cost of your query on SQL Server 2022 Web Edition can be lower than on SQL Server 2019 Web Edition. The @@VERSION function lets you check your server's version and edition.
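
As a minimal sketch of the statistics bullet above (assuming a hypothetical dbo.Orders table):

UPDATE STATISTICS dbo.Orders;   -- refresh statistics for one table

EXEC sp_updatestats;            -- refresh out-of-date statistics across the whole database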

The query governor cost limit error is a common problem in SQL Server that occurs when your query exceeds the configured threshold and has a high estimated cost. Increasing the query governor cost limit or optimizing the query are two solutions to this problem. Nonetheless, query optimization is the ideal option because it can improve query performance and efficiency while avoiding potential resource issues.

HostForLIFEASP.NET SQL Server 2022 Hosting



European SQL Server 2022 Hosting :: Recover Deleted or Modified Data Using Transaction Log

clock November 1, 2023 08:28 by author Peter

In this piece, we'll go over LSNs, the undocumented function fn_dblog(), and the recovery models that go with them in further depth.

Log of SQL Server Transactions
Every SQL Server database has a transaction log that records all transactions as well as the database changes made by each transaction. The transaction log is a crucial component of the database, and if there is a system failure, the transaction log may be necessary to restore consistency to your database.

The SQL Server Transaction Log is useful for recovering deleted or updated data if you run a DELETE or UPDATE operation with the incorrect condition or without filters. This is accomplished by listening to the records included within the SQL Server Transaction Log file. DML operations such as INSERT, UPDATE, and DELETE statements, as well as DDL operations such as CREATE and DROP statements, are among the events that are written to the SQL Server Transaction Log file without any additional configuration from the database administrator.

Retrieving Log Records with fn_dblog()

The fn_dblog() function is one of SQL Server's undocumented functions; it allows you to see transaction log records in the active portion of the transaction log file.
fn_dblog() Arguments

The fn_dblog() function takes two arguments.

  • The first is the initial log sequence number, abbreviated as LSN. You can alternatively use NULL to return everything from the beginning of the log.
  • The second is the LSN at the conclusion. You can alternatively specify NULL, which indicates that you wish to return everything to the log's conclusion.

fn_dblog()
SELECT * FROM fn_dblog (
              NULL, -- Start LSN nvarchar(25)
              NULL  -- End LSN nvarchar(25)
       )

We will not discuss every detail of the function, but the most useful column is described below:

The 'Operation' Column in fn_dblog()
The 'Operation' column indicates the type of operation that has been performed and logged in the transaction log file.

Common values include:

LOP_INSERT_ROWS – rows were inserted

LOP_MODIFY_ROW – a row was updated

LOP_DELETE_ROWS – rows were deleted

LOP_BEGIN_XACT – a transaction began
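
As a hedged sketch of how these records can be filtered, the following query lists DELETE operations captured in the active log for a hypothetical dbo.Employees table (AllocUnitName identifies the affected object):

SELECT [Current LSN], Operation, [Transaction ID], AllocUnitName
FROM fn_dblog(NULL, NULL)
WHERE Operation = 'LOP_DELETE_ROWS'
  AND AllocUnitName LIKE 'dbo.Employees%';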

Recovery Models
SQL Server backup and restore operations occur within the context of the recovery model of the database. Recovery models are designed to control transaction log maintenance. A recovery model is a database property that controls how transactions are logged, whether the transaction log requires (and allows) backing up, and what kinds of restore operations are available.

Three recovery models exist:

  • Simple – operations that require transaction log backups are not supported; log space is reclaimed automatically.
  • Full – all transactions are fully logged, and log backups enable point-in-time restore.
  • Bulk-logged – an adjunct to the full model that minimally logs bulk operations to reduce log space usage.

Typically, a database uses the full or simple recovery model, and it can be switched to another recovery model at any time. By default, when a new database is created in Microsoft SQL Server, its recovery model is set to full (inherited from the model database).
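
You can check the current recovery model of every database on the instance through the sys.databases catalog view:

SELECT name, recovery_model_desc
FROM sys.databases;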

Recovery Option
Restoring a database WITH RECOVERY is the default and is all you need for a single FULL backup. However, if you have additional backups to apply (differential, transaction log, etc.), restore the earlier ones WITH NORECOVERY so the backups can be combined.

You can also bring a database out of the restoring state without restoring another backup:
RESTORE DATABASE *database_name* WITH RECOVERY

HostForLIFEASP.NET SQL Server 2022 Hosting



European SQL Server 2022 Hosting :: How to check if SQL table is temporal?

clock October 30, 2023 10:49 by author Peter

There are two methods for determining whether a SQL table is temporal. The first is to examine the table definition: a system-versioned temporal table has two period columns (for example, sys_start_time and sys_end_time) declared with PERIOD FOR SYSTEM_TIME, which hold the beginning and end of the period for which each row is valid. A table with such a period definition is a temporal table.

The second method is to use the OBJECTPROPERTY() function, which returns information about a database object, including its temporal type. The documented property for this is TableTemporalType: it returns 2 for a system-versioned temporal table, 1 for a history table, and 0 for a non-temporal table. To determine a table's temporal type, use the SQL statement:

SELECT OBJECTPROPERTY(object_id('table_name'), 'TableTemporalType');

If the query returns 2, the table is a system-versioned temporal table. For instance, the following SQL statement will determine whether the Products table is temporal:

SELECT OBJECTPROPERTY(object_id('Products'), 'TableTemporalType');

If the result of the query is 2, then the Products table is temporal. Otherwise, the Products table is not temporal.
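
You can also check the temporal type through the sys.tables catalog view, which exposes temporal_type and temporal_type_desc columns:

SELECT name, temporal_type, temporal_type_desc
FROM sys.tables
WHERE name = 'Products';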

Note: Temporal tables are only supported in Microsoft SQL Server 2016 and later versions.

Here are some additional things to keep in mind about temporal tables:

  • In the SQL standard, temporal tables can be system-versioned (the history is managed automatically by the database engine) or application-time period tables (the valid-time period is managed by the application). SQL Server implements system-versioned temporal tables.
  • Temporal tables can be used to store both current and historical data. Current data is stored in the table, while historical data is stored in a history table.
  • Temporal tables can be used to query both current and historical data. The FOR SYSTEM_TIME clause specifies the time period for which data should be returned (see the example after this list).
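
A hedged illustration, assuming the Products table is system-versioned:

-- Return the Products rows as they existed at a past point in time
SELECT * FROM Products
FOR SYSTEM_TIME AS OF '2023-01-01T00:00:00';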

HostForLIFEASP.NET SQL Server 2022 Hosting



European SQL Server 2022 Hosting :: SQL Server optional parameters

clock October 27, 2023 07:50 by author Peter

This article will teach you about optional parameters and how to use them in stored procedures.

Optional parameters
Optional parameters are parameters for which you declare a default value; if the parameter is not passed, the default value is used.
Let's create an "Employees" table and, through several examples, learn how to write a stored procedure with optional parameters.

CREATE TABLE Employees (
EMPID INT PRIMARY KEY,
EMPName VARCHAR (50),
EMPAddress VARCHAR (Max),
ContactNumber VARCHAR (10),
EMPSalary INT
)

--Inserting multiple records in one table.

INSERT INTO Employees (EMPID, EMPName, EMPAddress, ContactNumber, EMPSalary)
VALUES (101, 'Peter', 'London', NULL, 65000),
(102, 'Scott', 'Manchester', '1234567809', 55000),   -- phone numbers as strings, matching the VARCHAR column
(103, 'Laura', 'Leeds', NULL, 95000),
(104, 'Samuel', 'Liverpool', '1234506789', 35000)

Now check the table by running a select query.
SELECT * FROM Employees;

Now create a stored procedure that inserts records into the table with mandatory parameters.
CREATE OR ALTER PROCEDURE usp_InsertEmployeesRecords
@EMPID INT,                     -- a procedure parameter cannot carry a PRIMARY KEY constraint
@EMPName VARCHAR (50),
@EMPAddress VARCHAR (Max),
@ContactNumber VARCHAR (10),
@EMPSalary INT
AS
BEGIN
    INSERT INTO Employees VALUES (@EMPID, @EMPName, @EMPAddress, @ContactNumber, @EMPSalary)
    SELECT * FROM Employees                     --Checking if all the records inserted in Employee table or not.
END

 

The stored procedure “usp_InsertEmployeesRecords” is now created.
Now let’s execute it and see the output.

EXECUTE usp_InsertEmployeesRecords 105, 'Maria', 'Leeds', '1203456789', 70000

As you can see above, the record is inserted because values were passed for all mandatory parameters.
Now consider another scenario, in which a user doesn't have all the information, for example a missing contact number, and tries to insert a record into the table.
Executing the same procedure again:

EXECUTE usp_InsertEmployeesRecords @EMPID = 106, @EMPName = 'Ricky', @EMPAddress = 'London', @EMPSalary = 80000;

After executing the procedure, we get an error along the lines of "Procedure or function 'usp_InsertEmployeesRecords' expects parameter '@ContactNumber', which was not supplied." The procedure was created with mandatory parameters, and we did not pass values for all of them.

Now, to overcome the above error, let's create the procedure with an optional parameter.

CREATE OR ALTER PROCEDURE usp_InsertEmployeesRecordsOptParam
@EMPID INT,
@EMPName VARCHAR (50),
@EMPAddress VARCHAR (Max),
@ContactNumber VARCHAR (10) = NULL,   -- optional: defaults to a real NULL (not the string 'NULL') when omitted
@EMPSalary INT
AS
BEGIN
    INSERT INTO Employees VALUES (@EMPID, @EMPName, @EMPAddress, @ContactNumber, @EMPSalary)
    SELECT * FROM Employees                     --Checking if all the records inserted in Employee table or not.
END

The stored procedure “usp_InsertEmployeesRecordsOptParam” is now created, with @ContactNumber as an optional parameter.
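
Re-running the earlier call against the new procedure now succeeds; the omitted @ContactNumber simply falls back to its NULL default.

EXECUTE usp_InsertEmployeesRecordsOptParam @EMPID = 106, @EMPName = 'Ricky', @EMPAddress = 'London', @EMPSalary = 80000;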

 

HostForLIFEASP.NET SQL Server 2022 Hosting



European SQL Server 2022 Hosting :: What exactly is a temporal table?

clock October 17, 2023 09:35 by author Peter

A temporal table is a sort of database table that is intended to track changes over time without the need for sophisticated custom programming or triggers. It accomplishes this by including system-generated columns in the table that reflect each row's valid time period. System-Versioned Temporal Tables are another name for them.

Temporal Tables with System Verification

The system automatically tracks changes to the data over time in this form of temporal table by creating a new version of each row every time it is modified or removed. The original row is kept with its valid time period, and a new version with its own valid time period is created. This enables you to view the table's history at any moment and query it as if it were a snapshot of the data at that time.

CREATE TABLE dbo.MyTemporalTable (
Id INT PRIMARY KEY,
Name VARCHAR(50),
EffectiveStartTime DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,   -- period columns must be NOT NULL
EffectiveEndTime DATETIME2 GENERATED ALWAYS AS ROW END NOT NULL,
PERIOD FOR SYSTEM_TIME (EffectiveStartTime, EffectiveEndTime)
) WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.MyTemporalTable_History));
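
A quick hypothetical demonstration: the period columns are populated automatically, and every UPDATE or DELETE moves the superseded row version into the history table.

-- The GENERATED ALWAYS columns must not be supplied explicitly
INSERT INTO dbo.MyTemporalTable (Id, Name) VALUES (1, 'Alpha');
UPDATE dbo.MyTemporalTable SET Name = 'Beta' WHERE Id = 1;

-- The superseded 'Alpha' version is now visible in the history table
SELECT * FROM dbo.MyTemporalTable_History;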


How to query the temporal table?
Temporal tables can be queried like any other database table. To query data across the main and history tables, a FOR SYSTEM_TIME clause with five temporal-specific sub-clauses was added to the SELECT statement. This syntax can be used on a single table, across multiple joins, and through views on top of temporal tables.

The five sub-clauses
AS OF <date_time>
FROM <start_date_time> TO <end_date_time>
BETWEEN <start_date_time> AND <end_date_time>
CONTAINED IN (<start_date_time>, <end_date_time>)
ALL

-- AS OF (EffectiveStartTime<= date_time AND EffectiveEndTime> date_time)
SELECT * FROM dbo.MyTemporalTable FOR SYSTEM_TIME AS OF '2023-03-10T10:00:00.0000000'

-- FROM (EffectiveStartTime< date_time AND EffectiveEndTime > date_time)
SELECT * FROM dbo.MyTemporalTable FOR SYSTEM_TIME FROM '2022-01-01' TO '2022-12-31'

--BETWEEN (EffectiveStartTime<= date_time AND EffectiveEndTime > date_time)
SELECT * FROM dbo.MyTemporalTable FOR SYSTEM_TIME BETWEEN '2022-01-01' AND '2022-12-31'

--CONTAINED (EffectiveStartTime>= date_time AND EffectiveEndTime <= date_time)
SELECT * FROM dbo.MyTemporalTable FOR SYSTEM_TIME CONTAINED IN ('2023-01-01', '2023-02-28')

--ALL (All rows)
SELECT * FROM dbo.MyTemporalTable FOR SYSTEM_TIME ALL

How to drop the temporal table?
Dropping a temporal table is similar to dropping any other table in your database. However, we need to consider a few things before dropping a temporal table to avoid data loss and ensure that the cleanup process is handled correctly.

Here are the general steps to drop a temporal table.

Drop the dependent objects: Before dropping the temporal table, you should first drop any dependent objects, such as triggers, indexes, or foreign keys that reference the temporal table.

Disable temporal feature: Disable the temporal feature of the table by running the ALTER TABLE statement to remove the system-versioning or application-time period specification.
ALTER TABLE dbo.MyTemporalTable SET (SYSTEM_VERSIONING = OFF);

Drop the temporal table: After removing all dependent objects and disabling the temporal feature of the table, drop the table using the DROP TABLE statement.
DROP TABLE IF EXISTS dbo.MyTemporalTable;

Limitations of temporal table

  • Storage requirements: System-versioned temporal tables require additional storage to maintain historical versions of the data. The size of the history table can grow quickly, especially for tables with many updates or deletes, which can impact performance and increase storage costs.
  • Query performance: Querying a temporal table can be more complex than querying a regular table, especially if you need to retrieve data from both the current and historical versions of the data. Depending on the database system, query performance may be slower for temporal tables than for regular tables.
  • Configuration complexity: Setting up a temporal table requires some additional configuration steps compared to a regular table. This can make it more complex to create and maintain the table, especially if you are not familiar with the temporal table syntax.
  • Compatibility: Not all database systems support temporal tables. While SQL Server, Oracle, and PostgreSQL all support temporal tables, other systems may not. This can limit your ability to use temporal tables if you need to migrate to a different database system in the future.
  • Migration complexity: If you have an existing table that you want to make temporal, you may need to perform a data migration to move the existing data to the history table. This can be complex and time-consuming, especially for large tables.

Advantages of temporal table
The main advantage of a temporal table is that it allows you to maintain a full history of changes to your data over time. This provides several benefits, including:

  • Auditing: With a temporal table, you can easily track changes to your data over time. This is useful for auditing purposes, as you can see when and how data was modified, deleted, or added.
  • Compliance: Many industries have regulatory requirements that mandate data retention policies. By maintaining a full history of changes to your data, you can ensure compliance with these requirements.
  • Recovery: In the event of data loss or corruption, a temporal table can be used to recover lost data. You can restore the table to a previous point in time and recover data that was lost or corrupted.
  • Analysis: A temporal table can be used for trend analysis, as you can see how data has changed over time. This can be useful for identifying patterns, trends, and anomalies in your data.
  • Data versioning: With a temporal table, you can easily maintain multiple versions of your data. This is useful if you need to maintain different versions of a record for different users or if you need to create a new version of a record without overwriting the previous version.

Overall, the temporal table provides a powerful mechanism for maintaining a full history of changes to your data. This can be useful for a variety of purposes, including auditing, compliance, recovery, analysis, and data versioning. By understanding the limitations of temporal tables, you can make an informed decision about whether they are the right solution for your needs.



