
European SQL Server 2022 Hosting :: How to Create a Data Mart from the Ground Up: A Guide for BI Teams Focused on Data Warehouses and SQL

clock August 1, 2025 08:01 by author Peter

In this article, we will design and implement a Data Mart by applying data warehouse concepts, SQL techniques, and BI-focused schema design.

Step 1. Understand Business Requirements for the Data Warehouse

  • Engage business stakeholders to define reporting needs.
  • Identify key facts (e.g., total sales, loan balance) and dimensions (e.g., product, branch, customer).
  • Define the grain of the data mart (e.g., "daily branch sales").

Step 2. Design the Data Warehouse Schema (Dimensional Modeling)
Use a Star Schema or Snowflake Schema, optimized for SQL queries; a minimal example follows the list below.

  • Fact Table (e.g., Fact_Sales): Contains numeric metrics and foreign keys
  • Dimension Tables (e.g., Dim_Customer, Dim_Product): Contain descriptive attributes to slice and filter data
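
As a sketch in T-SQL (all table and column names here are illustrative, not prescribed by this guide), a minimal star schema for the "daily branch sales" grain might look like this:

-- Dimension table: descriptive attributes keyed by a surrogate key
CREATE TABLE Dim_Product (
    ProductKey   INT IDENTITY(1,1) PRIMARY KEY, -- surrogate key
    ProductCode  VARCHAR(20)  NOT NULL,         -- natural/business key
    ProductName  VARCHAR(100) NOT NULL,
    Category     VARCHAR(50)  NOT NULL
);

-- Fact table: numeric measures at the declared grain (daily branch sales)
CREATE TABLE Fact_Sales (
    DateKey      INT NOT NULL,                  -- FK to Dim_Date
    BranchKey    INT NOT NULL,                  -- FK to Dim_Branch
    ProductKey   INT NOT NULL REFERENCES Dim_Product (ProductKey),
    SalesAmount  DECIMAL(18, 2) NOT NULL,
    QuantitySold INT NOT NULL
);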

Step 3. Source to Target Mapping (STM)

  • Identify source systems (e.g., OLTP DBs like Core Banking, ERP).
  • Document field-level mappings from source to warehouse tables.
  • Define SQL-based transformations (e.g., date format conversion, category lookup).
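
As an illustration, a documented field-level mapping might translate into SQL like this (staging.Sales and ref.CategoryLookup are hypothetical names):

-- Example STM transformations: date normalization, code cleanup, category lookup
SELECT
    CONVERT(DATE, s.txn_date, 103)       AS SaleDate,   -- dd/MM/yyyy text to DATE
    UPPER(LTRIM(RTRIM(s.branch_code)))   AS BranchCode, -- standardize codes
    COALESCE(c.CategoryName, 'Unknown')  AS Category    -- category lookup
FROM staging.Sales s
LEFT JOIN ref.CategoryLookup c
    ON c.CategoryId = s.category_id;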

Step 4. Build the ETL in SQL or ELT with dbt
You can build this step with plain SQL scripts or with dbt (Data Build Tool); a minimal load example follows the list below.

  • Extract: Pull raw data from staging tables.
  • Transform: Use SQL CTEs or dbt models to clean and join.
  • Load: Insert transformed data into fact/dimension tables.
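
A minimal load step in plain SQL might look like the following; in dbt, the model file would contain just the SELECT and dbt would materialize it. The names continue the earlier hypothetical schema:

-- Transform with a CTE, then load the fact table
WITH cleaned AS (
    SELECT d.DateKey, b.BranchKey, p.ProductKey, s.amount, s.qty
    FROM staging.Sales s
    JOIN Dim_Date    d ON d.FullDate    = CONVERT(DATE, s.txn_date, 103)
    JOIN Dim_Branch  b ON b.BranchCode  = s.branch_code
    JOIN Dim_Product p ON p.ProductCode = s.product_code
)
INSERT INTO Fact_Sales (DateKey, BranchKey, ProductKey, SalesAmount, QuantitySold)
SELECT DateKey, BranchKey, ProductKey, amount, qty
FROM cleaned;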

Tips

  • Use SCD Type 2 for dimensions (see the sketch after this list)
  • Maintain surrogate keys
  • Use indexes and partitions for performance
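
For the first tip, a common SCD Type 2 pattern expires the current dimension row and inserts a new version. A simplified T-SQL sketch, assuming the dimension carries EffectiveFrom, EffectiveTo, and IsCurrent tracking columns:

-- 1. Expire the current version when a tracked attribute changes
UPDATE d
SET d.EffectiveTo = GETDATE(), d.IsCurrent = 0
FROM Dim_Product d
JOIN staging.Product s ON s.product_code = d.ProductCode
WHERE d.IsCurrent = 1 AND d.Category <> s.category;

-- 2. Insert the new version (also covers brand-new products,
--    since expired/missing rows no longer have a current version)
INSERT INTO Dim_Product (ProductCode, ProductName, Category, EffectiveFrom, EffectiveTo, IsCurrent)
SELECT s.product_code, s.product_name, s.category, GETDATE(), '9999-12-31', 1
FROM staging.Product s
LEFT JOIN Dim_Product d
    ON d.ProductCode = s.product_code AND d.IsCurrent = 1
WHERE d.ProductCode IS NULL;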

Step 5. Connect to BI Tools

  • Expose the SQL-based data mart to Power BI, Tableau, or Looker.
  • Use pre-aggregated views for performance (see the example after this list).
  • Provide semantic consistency by using dimensional hierarchies.
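
For instance, a pre-aggregated view over the fact table keeps dashboard queries fast; this sketch reuses the hypothetical names from earlier:

-- Daily branch sales, pre-aggregated for BI tools
CREATE VIEW vw_DailyBranchSales AS
SELECT d.FullDate, b.BranchName,
       SUM(f.SalesAmount)  AS TotalSales,
       SUM(f.QuantitySold) AS TotalQuantity
FROM Fact_Sales f
JOIN Dim_Date   d ON d.DateKey   = f.DateKey
JOIN Dim_Branch b ON b.BranchKey = f.BranchKey
GROUP BY d.FullDate, b.BranchName;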

Step 6. Testing and Optimization

  • Write SQL test cases for row counts, null checks, and data accuracy (see the examples after this list).
  • Schedule SQL scripts or dbt jobs for refresh.
  • Monitor performance with query logs or warehouse query history.
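
Simple reconciliation tests can be expressed as plain SQL; for instance (hypothetical names again):

-- Row-count reconciliation: staging vs. fact
SELECT (SELECT COUNT(*) FROM staging.Sales) AS StagingRows,
       (SELECT COUNT(*) FROM Fact_Sales)    AS FactRows;

-- Referential check: fact rows without a matching dimension row (should be 0)
SELECT COUNT(*) AS OrphanRows
FROM Fact_Sales f
LEFT JOIN Dim_Product p ON p.ProductKey = f.ProductKey
WHERE p.ProductKey IS NULL;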

Conclusion
Building a Data Mart using SQL and data warehousing best practices is essential for scalable and high-performing BI. With the right design and automation in tools like dbt or SQL scripts, your organization can gain accurate, fast insights. In the next article, we’ll walk through building a Sales Data Mart in SQL + dbt with dimensional modeling and snapshotting logic.




European MySQL Hosting - HostForLIFE.eu :: Reporting Progress Using MySqlBackup.NET

clock July 28, 2025 08:03 by author Peter

In this thorough tutorial, we'll show you how to use MySqlBackup.NET to create attractive, real-time progress reports for database backup and restore operations. Whether you're building a web interface or a desktop program, this lesson will help you create polished progress indicators that keep your users informed at every stage of the process.

What is MySqlBackup.NET?
MySqlBackup.NET is an open-source C# library for backing up and restoring MySQL databases. Find out more at its GitHub repository.

Live Demo
Before we dive into the code, check out this visual demonstration of what we’ll be building:

This video showcases the real-time progress reporting system in action, complete with multiple beautiful CSS themes that you can customize for your application.

An Overview of the Export Progress Lifecycle
During an export, several things happen at once:


Understanding the Foundation
During the export process, MySqlBackup.NET reports its progress status through the event mb.ExportProgressChanged:
void Backup()
{
    string fileBackup = Server.MapPath("~/backup.sql");

    using (var conn = new MySqlConnection(constr))
    using (var cmd = conn.CreateCommand())
    using (var mb = new MySqlBackup(cmd))
    {
        conn.Open();
        mb.ExportInfo.IntervalForProgressReport = 200; // 5 updates per second
        mb.ExportProgressChanged += Mb_ExportProgressChanged;
        mb.ExportToFile(fileBackup);

    }
}

private void Mb_ExportProgressChanged(object sender, ExportProgressArgs e)
{
    long totalRows = e.TotalRowsInAllTables;
    long currentRow = e.CurrentRowIndexInAllTables;
    int percent = CalculatePercent(totalRows, currentRow);
}

Notice that the code above runs on a single thread (the main thread). To report progress while the export runs, we need to move the main MySqlBackup.NET process onto a secondary thread, i.e. run it asynchronously in C#.

Dedicating the Export Process to Another Thread (Asynchronously), Using Task.Run from System.Threading.Tasks
void Backup()
{
    // Asynchronous:
    // the export process is split off onto a secondary thread;
    // fire the task and return immediately
    _ = Task.Run(() => { BeginBackup(); });
}

void BeginBackup()
{
    string fileBackup = Server.MapPath("~/backup.sql");

    using (var conn = new MySqlConnection(constr))
    using (var cmd = conn.CreateCommand())
    using (var mb = new MySqlBackup(cmd))
    {
        conn.Open();
        mb.ExportInfo.IntervalForProgressReport = 200; // 5 updates per second
        mb.ExportProgressChanged += Mb_ExportProgressChanged;
        mb.ExportToFile(fileBackup);
    }
}


Next, create the intermediary caching layer.

Write a simple class to hold the values:

public class TaskInfo
{
    public bool IsComplete { get; set; } = false;
    public int Percent { get; set; } = 0;
    public long TotalRows { get; set; } = 0L;
    public long CurrentRow { get; set; } = 0L;
}


Declare a global static ConcurrentDictionary to hold instances of this class. ConcurrentDictionary is a specialized thread-safe collection: it allows multiple threads to safely read and write values concurrently.

We mark it static so that the cached values can be accessed by all threads (all requests) across the whole application.
public class api
{
    // accessible by any thread concurrently
    static ConcurrentDictionary<int, TaskInfo> dicTask = new ConcurrentDictionary<int, TaskInfo>();

    void Backup()
    {
        _ = Task.Run(() => { BeginBackup(); });
    }

    void BeginBackup()
    {
        ...
    }

    private void Mb_ExportProgressChanged(object sender, ExportProgressArgs e)
    {
        ...
    }
}

Next, we are going to receive the progress status values provided by MySqlBackup.NET:
public class api
{
    static ConcurrentDictionary<int, TaskInfo> dicTask = new ConcurrentDictionary<int, TaskInfo>();

    void Backup()
    {
        // get a new taskid, can be a random number
        int newTaskId = GetNewTaskId();

        _ = Task.Run(() => { BeginBackup(newTaskId); });
    }

    // caching the taskid in this thread
    int taskid = 0;

    void BeginBackup(int newTaskId)
    {
        // cache the task id
        taskid = newTaskId;

        string fileBackup = Server.MapPath("~/backup.sql");

        // Create a new task info to collect data
        TaskInfo taskInfo = new TaskInfo();

        // Cache the task info into the global dictionary
        dicTask[newTaskId] = taskInfo;

        using (var conn = new MySqlConnection(constr))
        using (var cmd = conn.CreateCommand())
        using (var mb = new MySqlBackup(cmd))
        {
            conn.Open();
            mb.ExportInfo.IntervalForProgressReport = 200; // 5 updates per second
            mb.ExportProgressChanged += Mb_ExportProgressChanged;
            mb.ExportToFile(fileBackup);
        }

        // Past this point, the export process has finished

        // Informing the frontend about the completion
        taskInfo.IsComplete = true;
    }

    private void Mb_ExportProgressChanged(object sender, ExportProgressArgs e)
    {
        // Retrieve the task info belonging to this task id
        if (dicTask.TryGetValue(taskid, out var taskInfo))
        {
            // Collect the data provided by the export process
            taskInfo.TotalRows = e.TotalRowsInAllTables;
            taskInfo.CurrentRow = e.CurrentRowIndexInAllTables;
            taskInfo.Percent = CalculatePercent(taskInfo.TotalRows, taskInfo.CurrentRow);
        }
    }
}
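
The CalculatePercent helper used above is our own code, not part of MySqlBackup.NET; a minimal sketch, consistent with the percentage logic shown later in this tutorial:

// Minimal sketch of the CalculatePercent helper (assumed custom code)
int CalculatePercent(long totalRows, long currentRow)
{
    if (totalRows <= 0L || currentRow <= 0L)
        return 0;

    if (currentRow >= totalRows)
        return 100;

    return (int)(currentRow * 100L / totalRows);
}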

Now, let’s integrate this logic into the application.
In this article, we’ll be using ASP.NET Web Forms as our demonstration platform to show how progress reporting integrates seamlessly with web applications.

In this demo of ASP.NET Web Forms, we’ll be using:

  • ZERO VIEWSTATE  – The secret to modern Web Forms
  • Zero server control
  • No custom user control
  • No postback
  • No UpdatePanel
  • Pure HTML, CSS, and JavaScript / Fetch API

Important Note: Once you understand the core working mechanism and patterns demonstrated in this walkthrough, you can easily replicate these same ideas to build your own UI progress reporting system in any application framework. Whether you’re developing with:

  • Windows Forms (WinForms) applications
  • .NET Core web applications
  • ASP.NET MVC projects
  • Blazor applications
  • Mono cross-platform apps
  • Console applications with custom UI
  • WPF desktop applications

The fundamental principles remain the same: create an intermediary caching layer, implement the two-thread pattern, and provide real-time UI updates. The specific implementation details will vary by framework, but the core architecture and MySqlBackup.NET integration patterns are universally applicable.

Let’s begin by building our backend API to handle the backup and restore logic. We’ll create a new blank ASP.NET Web Forms page. A typical new blank web form page will look something like this:
<%@ Page Language="C#" AutoEventWireup="true" CodeBehind="apiBackup.aspx.cs" Inherits="myweb.apiBackup" %>

<!DOCTYPE html>

<html xmlns="http://www.w3.org/1999/xhtml">
<head runat="server">
    <title></title>
</head>
<body>
    <form id="form1" runat="server">
        <div>
        </div>
    </form>
</body>
</html>


Delete everything from the frontend markup, leaving only the page directive declaration at the top:
<%@ Page Language="C#" AutoEventWireup="true" CodeBehind="apiBackup.aspx.cs" Inherits="myweb.apiBackup" %>

Now, go to the code-behind, which typically looks like this:
namespace myweb
{
    public partial class apiBackup : System.Web.UI.Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {

        }
    }
}


Patch the code we wrote previously into the code-behind of this web form page:
// backend C#

public partial class apiBackup : System.Web.UI.Page
{
    string constr = "server=localhost;user=root;pwd=1234;database=mydatabase;";

    static ConcurrentDictionary<int, TaskInfo> dicTask = new ConcurrentDictionary<int, TaskInfo>();

    protected void Page_Load(object sender, EventArgs e)
    {
        // get the action for the request from the frontend
        string action = Request["action"] + "";

        switch (action)
        {
            case "backup":
                Backup();
                break;
        }
    }

    int taskid = 0;

    void Backup()
    {
        int newTaskId = GetNewTaskId();

        // run the task in the background
        // no waiting
        _ = Task.Run(() => { BeginBackup(newTaskId); });

        // immediately tell the task id to the frontend
        Response.Write(newTaskId.ToString());
    }

    void BeginBackup(int newTaskId)
    {
        taskid = newTaskId;

        string fileBackup = Server.MapPath("~/backup.sql");

        TaskInfo taskInfo = new TaskInfo();

        dicTask[newTaskId] = taskInfo;

        using (var conn = new MySqlConnection(constr))
        using (var cmd = conn.CreateCommand())
        using (var mb = new MySqlBackup(cmd))
        {
            conn.Open();
            mb.ExportInfo.IntervalForProgressReport = 200; // 5 updates per second
            mb.ExportProgressChanged += Mb_ExportProgressChanged;
            mb.ExportToFile(fileBackup);
        }

        taskInfo.IsComplete = true;
    }

    private void Mb_ExportProgressChanged(object sender, ExportProgressArgs e)
    {
        if (dicTask.TryGetValue(taskid, out var taskInfo))
        {
            taskInfo.TotalRows = e.TotalRowsInAllTables;
            taskInfo.CurrentRow = e.CurrentRowIndexInAllTables;
            taskInfo.Percent = CalculatePercent(taskInfo.TotalRows, taskInfo.CurrentRow);
        }
    }
}


Now we have built an API endpoint that lets the frontend tell the server to start a backup task.

There are two main ways the frontend can do this:

  • Get Request or
  • Post Request

Using Get Request:
// frontend javascript

// cache the task id globally at the frontend
let taskid = 0;

async function backup() {
    const response = await fetch('/api?action=backup', {
        method: 'GET',
        credentials: 'include'
    });

    // obtain the task id returned from the backend

    // convert the return response into text
    const responseText = await response.text();

    // convert the text into number (task id)
    taskid = parseInt(responseText);
}


Using Post Request:
// frontend javascript

async function backup() {

    const formData = new FormData();
    formData.append('action', 'backup');

    const response = await fetch('/api', {
        method: 'POST',
        body: formData,
        credentials: 'include'
    });

    // obtain the task id returned from the backend

    // convert the return response into text
    const responseText = await response.text();

    // convert the text into number (task id)
    taskid = parseInt(responseText);
}

In this context, it is recommended to use a POST request, as the query string will not be exposed to the server log.

Next, at the frontend, write a new function getStatus() to get the progress status from the backend:
// frontend javascript

async function getStatus() {

    const formData = new FormData();
    formData.append('action', 'getstatus');
    formData.append('taskid', taskid);

    const response = await fetch('/api', {
        method: 'POST',
        body: formData,
        credentials: 'include'
    });

    // response = progress status
}

Now, back to the backend api, handle a new action for “getstatus”:
protected void Page_Load(object sender, EventArgs e)
{
    string action = Request["action"] + "";

    switch (action)
    {
        case "backup":
            Backup();
            break;
        case "getstatus":
            GetStatus();
            break;
    }
}

void GetStatus()
{
    string requestTaskId = Request["taskid"] + "";

    int _taskid = Convert.ToInt32(requestTaskId);

    // get the task info from the global dictionary
    if (dicTask.TryGetValue(_taskid, out var taskInfo))
    {
        // convert the class object into json string

        // Install the NuGet package "System.Text.Json" to enable this function.
        // You also need to add a 'using' directive at the top of your code:
        // using System.Text.Json;

        string json = JsonSerializer.Serialize(taskInfo);
        Response.ContentType = "application/json";

        // send the progress status back to the frontend
        Response.Write(json);
    }
}


Example response (JSON):
// json

{
  "IsComplete": false,
  "Percent": 50,
  "TotalRows": 10000,
  "CurrentRow": 5000
}

Back at the frontend, continue handling the response:
// frontend javascript

async function getStatus() {
    const formData = new FormData();
    formData.append('action', 'getstatus');
    formData.append('taskid', taskid);

    const response = await fetch('/api', {
        method: 'POST',
        body: formData,
        credentials: 'include'
    });

    // Convert the response to text
    let result = await response.text();

    // Convert the result to JSON object
    let jsonObject = JSON.parse(result);

    // You can now work with progress value:
    document.getElementById("spanIsComplete").textContent  = jsonObject.IsComplete;
    document.getElementById("spanPercent").textContent  = jsonObject.Percent + "%";
    document.getElementById("spanTotalRows").textContent  = jsonObject.TotalRows;
    document.getElementById("spanCurrentRow").textContent  = jsonObject.CurrentRow;
}

On the front page, you will have the following HTML span containers for the JavaScript above to fill in:
// frontend html

<p>Is Complete: <span id="spanIsComplete"></span></p>
<p>Percent: <span id="spanPercent"></span></p>
<p>Total Rows: <span id="spanTotalRows"></span></p>
<p>Current Row: <span id="spanCurrentRow"></span></p>

Now, we’ll use an interval timer to call getStatus() every 200ms and continuously update the frontend UI.

// frontend javascript

let intervalTimer = null;

// Start monitoring
function startMonitoring() {
    stopMonitoring();
    intervalTimer = setInterval(
        () => getStatus(),
        200);
}

// Stop monitoring
function stopMonitoring() {
    if (intervalTimer) {

        // stop the timer
        clearInterval(intervalTimer);

        // remove the timer reference
        intervalTimer = null;
    }
}

Now, once the backup has started successfully, we begin monitoring the progress by starting the interval timer:
// frontend javascript

async function backup() {

    const formData = new FormData();
    formData.append('action', 'backup');

    const response = await fetch('/api', {
        method: 'POST',
        body: formData,
        credentials: 'include'
    });

    const responseText = await response.text();

    taskid = parseInt(responseText);

    // by this point the backup task has already begun
    // start monitoring the progress status
    startMonitoring();
}

Now, the interval fires 5 times per second… but it should stop once the task is completed.
How do we stop it when the task is completed? By calling the function:
stopMonitoring();

Where is a good place to stop the timer?
Want to pause for a moment and think?
Yes: right after the status returns from the backend, where we can check the completion flag. If the task is complete, simply stop the timer:
// frontend javascript

async function getStatus() {
    const formData = new FormData();
    formData.append('action', 'getstatus');
    formData.append('taskid', taskid);

    const response = await fetch('/api', {
        method: 'POST',
        body: formData,
        credentials: 'include'
    });

    let result = await response.text();

    let jsonObject = JSON.parse(result);

    document.getElementById("spanIsComplete").textContent  = jsonObject.IsComplete;
    document.getElementById("spanPercent").textContent  = jsonObject.Percent + "%";
    document.getElementById("spanTotalRows").textContent  = jsonObject.TotalRows;
    document.getElementById("spanCurrentRow").textContent  = jsonObject.CurrentRow;

    // now check for the completion status here
    if (jsonObject.IsComplete) {
        // yes, it's completed, call the timer stop now
        stopMonitoring();
    }
}


Now, before we proceed to the full walkthrough section, let’s add one more function.

What if we want to stop the export before it finishes?

Let’s create another JavaScript function that asks the backend to stop the process:
// frontend javascript

async function stopTask() {

    const formData = new FormData();
    formData.append("action", "stoptask");
    formData.append("taskid", taskid);

    // sending stop request to backend
    const response = await fetch('/api', {
        method: 'POST',
        body: formData,
        credentials: 'include'
    });
}


Back to the C# backend API endpoint.

First, extend the task info object to handle the stop request:
// backend C#

public class TaskInfo
{
    public bool IsComplete { get; set; } = false;
    public int Percent { get; set; } = 0;
    public long TotalRows { get; set; } = 0L;
    public long CurrentRow { get; set; } = 0L;

    // add a boolean flag for marking the task to be cancelled
    public bool RequestCancel { get; set; } = false;
}

Let's handle the new stop request:
// backend C#

protected void Page_Load(object sender, EventArgs e)
{
    string action = Request["action"] + "";

    switch (action)
    {
        case "backup":
            Backup();
            break;
        case "getstatus":
            GetStatus();
            break;

        // handle stop request
        case "stoptask":
            Stop();
            break;
    }
}

void Stop()
{
    string requestTaskId = Request["taskid"] + "";

    int _taskid = Convert.ToInt32(requestTaskId);

    // get the task info
    if (dicTask.TryGetValue(_taskid, out var taskInfo))
    {
        // set the boolean flag to signal the request to cancel task
        taskInfo.RequestCancel = true;
    }
}


As you can see, handling the stop request does not immediately stop the process; it only sets a boolean flag. We cannot stop MySqlBackup directly from here; instead, we set the flag so the export process is notified that it should stop.

With the interval configured earlier, MySqlBackup.NET raises the progress event every 200ms; that is the moment we have an opportunity to communicate with the export process.

Here is the code we already wrote in the previous section; at the very end of the progress handler, we perform the real stop:
// backend C#
void BeginBackup(int newTaskId)
{
    taskid = newTaskId;

    string fileBackup = Server.MapPath("~/backup.sql");

    TaskInfo taskInfo = new TaskInfo();

    dicTask[newTaskId] = taskInfo;

    using (var conn = new MySqlConnection(constr))
    using (var cmd = conn.CreateCommand())
    using (var mb = new MySqlBackup(cmd))
    {
        conn.Open();
        mb.ExportInfo.IntervalForProgressReport = 200; // 5 updates per second
        mb.ExportProgressChanged += Mb_ExportProgressChanged;
        mb.ExportToFile(fileBackup);
    }

    taskInfo.IsComplete = true;
}

private void Mb_ExportProgressChanged(object sender, ExportProgressArgs e)
{
    if (dicTask.TryGetValue(taskid, out var taskInfo))
    {
        taskInfo.TotalRows = e.TotalRowsInAllTables;
        taskInfo.CurrentRow = e.CurrentRowIndexInAllTables;
        taskInfo.Percent = CalculatePercent(taskInfo.TotalRows, taskInfo.CurrentRow);

        // MySqlBackup.NET raises this event every 200ms.
        // This is our opportunity to talk to MySqlBackup.

        // Check the cancellation boolean flag
        if (taskInfo.RequestCancel)
        {
            // Here we directly instruct MySqlBackup to stop
            // all of its inner processes.
            // Do the real "stop" here
            ((MySqlBackup)sender).StopAllProcess();

            // Note: when a task is cancelled, you may want to clean up the partially created file.
            // This is handled in the full implementation section.
        }
    }
}

Above is the basic walkthrough.

If you’ve followed along this far, congratulations! You now understand the core concepts. What follows is the production-ready implementation with all the bells and whistles – error handling, cancellation, file management, and more. Don’t worry if it looks complex – it’s built on the same foundation you just learned.
Full Walkthrough

MySqlBackup.NET provides two essential classes that make progress reporting possible:

1. ExportProgressArgs.cs

This class provides detailed information during backup operations, including:

  • Current table being processed
  • Total number of tables
  • Row counts (current and total)
  • Progress percentages

2. ImportProgressArgs.cs
This class tracks restore operations with:

  • Bytes processed vs. total bytes
  • Import completion percentage
  • Real-time progress updates

Getting Started: Basic Progress Events
The magic happens when you subscribe to progress change events. Here’s how to set up the fundamental progress reporting:

Backup Progress Monitoring

void Backup()
{
    using (var conn = new MySqlConnection(connectionString))
    using (var cmd = conn.CreateCommand())
    using (var mb = new MySqlBackup(cmd))
    {
        conn.Open();

        // Set update frequency (100ms = 10 updates per second)
        mb.ExportInfo.IntervalForProgressReport = 100;

        // Subscribe to progress events
        mb.ExportProgressChanged += Mb_ExportProgressChanged;

        mb.ExportToFile(filePathSql);
    }
}

private void Mb_ExportProgressChanged(object sender, ExportProgressArgs e)
{
    // Rich progress information available
    string currentTable = e.CurrentTableName;
    int totalTables = e.TotalTables;
    int currentTableIndex = e.CurrentTableIndex;
    long totalRows = e.TotalRowsInAllTables;
    long currentRows = e.CurrentRowIndexInAllTables;
    long currentTableRows = e.TotalRowsInCurrentTable;
    long currentTableProgress = e.CurrentRowIndexInCurrentTable;

    // Calculate completion percentage
    int percentCompleted = 0;
    if (e.CurrentRowIndexInAllTables > 0L && e.TotalRowsInAllTables > 0L)
    {
        if (e.CurrentRowIndexInAllTables >= e.TotalRowsInAllTables)
        {
            percentCompleted = 100;
        }
        else
        {
            percentCompleted = (int)(e.CurrentRowIndexInAllTables * 100L / e.TotalRowsInAllTables);
        }
    }
}


Restore Progress Monitoring
void Restore()
{
    using (var conn = config.GetNewConnection())
    using (var cmd = conn.CreateCommand())
    using (var mb = new MySqlBackup(cmd))
    {
        conn.Open();

        // Set update frequency
        mb.ImportInfo.IntervalForProgressReport = 100;

        // Subscribe to progress events
        mb.ImportProgressChanged += Mb_ImportProgressChanged;

        mb.ImportFromFile(filePathSql);
    }
}

private void Mb_ImportProgressChanged(object sender, ImportProgressArgs e)
{
    // Byte-based progress tracking
    long totalBytes = e.TotalBytes;
    long currentBytes = e.CurrentBytes;

    // Calculate completion percentage
    int percentCompleted = 0;
    if (e.CurrentBytes > 0L && e.TotalBytes > 0L)
    {
        if (e.CurrentBytes >= e.TotalBytes)
        {
            percentCompleted = 100;
        }
        else
        {
            percentCompleted = (int)(e.CurrentBytes * 100L / e.TotalBytes);
        }
    }
}


Choosing the Right Update Frequency
The update interval significantly impacts your application’s responsiveness:
// Configure update frequency based on your needs:

// Desktop/Local Applications (low latency)
mb.ExportInfo.IntervalForProgressReport = 100;  // 10 updates/second
mb.ImportInfo.IntervalForProgressReport = 250;  // 4 updates/second

// Web Applications (higher latency)
mb.ExportInfo.IntervalForProgressReport = 500;  // 2 updates/second
mb.ImportInfo.IntervalForProgressReport = 1000; // 1 update/second


Performance Guidelines:

  • 100ms: Perfect for desktop applications – very responsive
  • 250ms: Great balance for most applications
  • 500ms: Ideal for web applications with moderate traffic
  • 1000ms: Best for high-traffic web applications

Architecture: The Two-Thread Pattern
Understanding the architecture is crucial for building robust progress reporting:

MySqlBackup.NET Internal Threads

  1. Main Process Thread: Executes actual backup/restore operations
  2. Progress Reporting Thread: Periodically reports progress via timer events

Your Application Threads

  • Backend Thread: Handles MySqlBackup.NET operations and caches progress data
  • UI Thread: Retrieves cached data and updates the user interface

Creating the Progress Data Model
In your application, you need to write your own class to hold the progress values on your side. Here’s a comprehensive class that caches all the progress information:
class ProgressReportTask
{
    public int ApiCallIndex { get; set; }
    public int TaskId { get; set; }
    public int TaskType { get; set; }  // 1 = backup, 2 = restore
    public string FileName { get; set; }
    public string SHA256 { get; set; }

    // Task status tracking
    public bool IsStarted { get; set; }
    public bool IsCompleted { get; set; } = false;
    public bool IsCancelled { get; set; } = false;
    public bool RequestCancel { get; set; } = false;
    public bool HasError { get; set; } = false;
    public string ErrorMsg { get; set; } = "";

    // Time tracking
    public DateTime TimeStart { get; set; } = DateTime.MinValue;
    public DateTime TimeEnd { get; set; } = DateTime.MinValue;
    public TimeSpan TimeUsed { get; set; } = TimeSpan.Zero;

    // Backup-specific progress data
    public int TotalTables { get; set; } = 0;
    public int CurrentTableIndex { get; set; } = 0;
    public string CurrentTableName { get; set; } = "";
    public long TotalRowsCurrentTable { get; set; } = 0;
    public long CurrentRowCurrentTable { get; set; } = 0;
    public long TotalRows { get; set; } = 0;
    public long CurrentRowIndex { get; set; } = 0;

    // Restore-specific progress data
    public long TotalBytes { get; set; } = 0;
    public long CurrentBytes { get; set; } = 0;

    public int PercentCompleted { get; set; } = 0;

    // UI-friendly properties
    [JsonPropertyName("TaskTypeName")]
    public string TaskTypeName
    {
        get
        {
            return TaskType switch
            {
                1 => "Backup",
                2 => "Restore",
                _ => "Unknown"
            };
        }
    }

    [JsonPropertyName("TimeStartDisplay")]
    public string TimeStartDisplay => TimeStart.ToString("yyyy-MM-dd HH:mm:ss");

    [JsonPropertyName("TimeEndDisplay")]
    public string TimeEndDisplay => TimeEnd.ToString("yyyy-MM-dd HH:mm:ss");

    [JsonPropertyName("TimeUsedDisplay")]
    public string TimeUsedDisplay => TimeDisplayHelper.TimeSpanToString(TimeUsed);

    [JsonPropertyName("FileDownloadUrl")]
    public string FileDownloadUrl
    {
        get
        {
            if (!string.IsNullOrEmpty(FileName))
            {
                return $"/apiFiles?folder=backup&filename={FileName}";
            }
            return "";
        }
    }
}
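
Note that TimeDisplayHelper.TimeSpanToString above is a small custom formatting helper, not a framework API; a minimal sketch might look like this:

// Minimal sketch of the TimeDisplayHelper used by TimeUsedDisplay (assumed helper)
public static class TimeDisplayHelper
{
    public static string TimeSpanToString(TimeSpan ts)
    {
        if (ts.TotalHours >= 1)
            return $"{(int)ts.TotalHours}h {ts.Minutes}m {ts.Seconds}s";

        if (ts.TotalMinutes >= 1)
            return $"{ts.Minutes}m {ts.Seconds}s";

        return $"{ts.Seconds}.{ts.Milliseconds:000}s";
    }
}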


Building the User Interface – Framework Agnostic Approach
Create a new blank ASP.NET Web Forms page and delete everything from the frontend markup, leaving only the page directive declaration at the top:
<%@ Page Language="C#" AutoEventWireup="true" CodeBehind="apiProgressReport.aspx.cs" Inherits="System.pages.apiProgressReport" %>

Route this page to /apiProgressReport using your preferred routing method.
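
One way to do this in classic ASP.NET is System.Web.Routing's MapPageRoute, registered at application startup; the physical page path below is an assumption based on the page directive above:

// Global.asax.cs -- register the friendly route at startup
using System.Web.Routing;

protected void Application_Start(object sender, EventArgs e)
{
    RouteTable.Routes.MapPageRoute(
        "apiProgressReport",               // route name
        "apiProgressReport",               // URL pattern
        "~/pages/apiProgressReport.aspx"); // physical Web Forms page (assumed path)
}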

Main API Controller

// Thread-safe progress cache
static ConcurrentDictionary<int, ProgressReportTask> dicTask = new ConcurrentDictionary<int, ProgressReportTask>();

protected void Page_Load(object sender, EventArgs e)
{
    if (!IsUserAuthorized())
    {
        Response.StatusCode = 401;
        Response.Write("0|Unauthorized access");
        Response.End();
        return;
    }

    string action = (Request["action"] + "").ToLower();

    switch (action)
    {
        case "backup":
            Backup();
            break;
        case "restore":
            Restore();
            break;
        case "stoptask":
            StopTask();
            break;
        case "gettaskstatus":
            GetTaskStatus();
            break;
        default:
            Response.StatusCode = 400;
            Response.Write("0|Invalid action");
            break;
    }
}

bool IsUserAuthorized()
{
    // Implement your authentication logic here
    // Check user login and backup permissions
    return true; // Simplified for demo
}


With this setup, the frontend can access the API in two main ways.

Example 1: using a query string (a GET request), simple and direct:
/apiProgressReport?action=backup
/apiProgressReport?action=restore
/apiProgressReport?action=stoptask
/apiProgressReport?action=gettaskstatus

Example 2: using a POST request through the Fetch API or XMLHttpRequest. In a nutshell, the JavaScript Fetch API call will look something like this:
// Javascript

// backup
const formData = new FormData();
formData.append('action', 'backup');
const response = await fetch('/apiProgressReport', {
    method: 'POST',
    body: formData,
    credentials: 'include'
});

// restore
const formData = new FormData();
formData.append('action', 'restore');
// attach and send the file to server
formData.append('fileRestore', fileRestore.files[0]);
const response = await fetch('/apiProgressReport', {
    method: 'POST',
    body: formData,
    credentials: 'include'
});

Backup Implementation
void Backup()
{
    var taskId = GetNewTaskId();

    // Start task asynchronously (fire and forget pattern)
    _ = Task.Run(() => { BeginExport(taskId); });

    // Immediately return task ID to frontend
    Response.Write(taskId.ToString());
}

int thisTaskId = 0;

void BeginExport(int newTaskId)
{
    thisTaskId = newTaskId;

    ProgressReportTask task = new ProgressReportTask()
    {
        TaskId = newTaskId,
        TaskType = 1, // backup
        TimeStart = DateTime.Now,
        IsStarted = true
    };

    dicTask[newTaskId] = task;

    try
    {
        string folder = Server.MapPath("~/App_Data/backup");
        Directory.CreateDirectory(folder);

        string fileName = $"backup-{DateTime.Now:yyyy-MM-dd_HHmmss}.sql";
        string filePath = Path.Combine(folder, fileName);

        using (var conn = config.GetNewConnection())
        using (var cmd = conn.CreateCommand())
        using (var mb = new MySqlBackup(cmd))
        {
            conn.Open();
            mb.ExportInfo.IntervalForProgressReport = 100;
            mb.ExportProgressChanged += Mb_ExportProgressChanged;
            mb.ExportToFile(filePath);
        }

        // Handle cancellation or completion
        if (task.RequestCancel)
        {
            task.IsCancelled = true;
            try
            {
                if (File.Exists(filePath))
                    File.Delete(filePath);
            }
            catch { }
        }
        else
        {
            // Successfully completed
            task.FileName = fileName;
            task.SHA256 = Sha256.Compute(filePath);
        }

        task.TimeEnd = DateTime.Now;
        task.TimeUsed = DateTime.Now - task.TimeStart;
        task.IsCompleted = true;
    }
    catch (Exception ex)
    {
        task.HasError = true;
        task.ErrorMsg = ex.Message;
        task.TimeEnd = DateTime.Now;
        task.IsCompleted = true;
    }
}

private void Mb_ExportProgressChanged(object sender, ExportProgressArgs e)
{
    if (dicTask.TryGetValue(thisTaskId, out var task))
    {
        // Update all progress information
        task.CurrentTableName = e.CurrentTableName;
        task.TotalTables = e.TotalTables;
        task.CurrentTableIndex = e.CurrentTableIndex;
        task.TotalRows = e.TotalRowsInAllTables;
        task.CurrentRowIndex = e.CurrentRowIndexInAllTables;
        task.TotalRowsCurrentTable = e.TotalRowsInCurrentTable;
        task.CurrentRowCurrentTable = e.CurrentRowIndexInCurrentTable;

        // Calculate percentage
        if (e.CurrentRowIndexInAllTables > 0L && e.TotalRowsInAllTables > 0L)
        {
            if (e.CurrentRowIndexInAllTables >= e.TotalRowsInAllTables)
            {
                task.PercentCompleted = 100;
            }
            else
            {
                task.PercentCompleted = (int)(e.CurrentRowIndexInAllTables * 100L / e.TotalRowsInAllTables);
            }
        }
        else
        {
            task.PercentCompleted = 0;
        }

        // Handle cancellation requests
        if (task.RequestCancel)
        {
            ((MySqlBackup)sender).StopAllProcess();
        }
    }
}
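
Sha256.Compute above is another small custom helper rather than a built-in API; a minimal sketch using System.Security.Cryptography:

// Minimal sketch of the Sha256 helper (assumed custom class)
using System;
using System.IO;
using System.Security.Cryptography;

public static class Sha256
{
    public static string Compute(string filePath)
    {
        using (var sha = SHA256.Create())
        using (var stream = File.OpenRead(filePath))
        {
            byte[] hash = sha.ComputeHash(stream);
            return BitConverter.ToString(hash).Replace("-", "").ToLowerInvariant();
        }
    }
}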

Restore Implementation
void Restore()
{
    var taskId = GetNewTaskId();

    ProgressReportTask task = new ProgressReportTask()
    {
        TaskId = taskId,
        TaskType = 2, // restore
        TimeStart = DateTime.Now,
        IsStarted = true
    };

    dicTask[taskId] = task;

    // Validate uploaded file
    if (Request.Files.Count == 0)
    {
        task.IsCompleted = true;
        task.HasError = true;
        task.ErrorMsg = "No file uploaded";
        task.TimeEnd = DateTime.Now;
        return;
    }

    string fileExtension = Request.Files[0].FileName.ToLower().Trim();
    if (!fileExtension.EndsWith(".zip") && !fileExtension.EndsWith(".sql"))
    {
        task.IsCompleted = true;
        task.HasError = true;
        task.ErrorMsg = "Invalid file type. Only .zip or .sql files are supported.";
        task.TimeEnd = DateTime.Now;
        return;
    }

    // Save uploaded file
    string folder = Server.MapPath("~/App_Data/backup");
    Directory.CreateDirectory(folder);
    string fileName = $"restore-{DateTime.Now:yyyy-MM-dd_HHmmss}";
    string filePath = Path.Combine(folder, fileName + ".sql");

    if (fileExtension.EndsWith(".zip"))
    {
        string zipPath = Path.Combine(folder, fileName + ".zip");
        Request.Files[0].SaveAs(zipPath);
        ZipHelper.ExtractFile(zipPath, filePath);
        task.FileName = fileName + ".zip";
    }
    else
    {
        Request.Files[0].SaveAs(filePath);
        task.FileName = fileName + ".sql";
    }

    // Start restore process asynchronously
    _ = Task.Run(() => { BeginRestore(taskId, filePath); });

    Response.Write(taskId.ToString());
}

void BeginRestore(int newTaskId, string filePath)
{
    thisTaskId = newTaskId;

    if (dicTask.TryGetValue(thisTaskId, out ProgressReportTask task))
    {
        try
        {
            task.FileName = Path.GetFileName(filePath);
            task.SHA256 = Sha256.Compute(filePath);

            using (var conn = config.GetNewConnection())
            using (var cmd = conn.CreateCommand())
            using (var mb = new MySqlBackup(cmd))
            {
                conn.Open();
                mb.ImportInfo.IntervalForProgressReport = 100;
                mb.ImportProgressChanged += Mb_ImportProgressChanged;
                mb.ImportFromFile(filePath);
            }

            if (task.RequestCancel)
            {
                task.IsCancelled = true;
            }

            task.TimeEnd = DateTime.Now;
            task.TimeUsed = DateTime.Now - task.TimeStart;
            task.IsCompleted = true;
        }
        catch (Exception ex)
        {
            task.HasError = true;
            task.ErrorMsg = ex.Message;
            task.TimeEnd = DateTime.Now;
            task.TimeUsed = DateTime.Now - task.TimeStart;
            task.IsCompleted = true;
        }
    }
}

private void Mb_ImportProgressChanged(object sender, ImportProgressArgs e)
{
    if (dicTask.TryGetValue(thisTaskId, out var task))
    {
        task.TotalBytes = e.TotalBytes;
        task.CurrentBytes = e.CurrentBytes;

        // Calculate percentage
        if (e.CurrentBytes > 0L && e.TotalBytes > 0L)
        {
            if (e.CurrentBytes >= e.TotalBytes)
            {
                task.PercentCompleted = 100;
            }
            else
            {
                task.PercentCompleted = (int)(e.CurrentBytes * 100L / e.TotalBytes);
            }
        }
        else
        {
            task.PercentCompleted = 0;
        }

        // Handle cancellation requests
        if (task.RequestCancel)
        {
            ((MySqlBackup)sender).StopAllProcess();
        }
    }
}


Task Control Methods

void StopTask()
{
    if (int.TryParse(Request["taskid"] + "", out int taskId))
    {
        if (dicTask.TryGetValue(taskId, out ProgressReportTask task))
        {
            task.RequestCancel = true;
            Response.Write("1");
        }
        else
        {
            Response.Write("0|Task not found");
        }
    }
    else
    {
        Response.Write("0|Invalid task ID");
    }
}

void GetTaskStatus()
{
    if (int.TryParse(Request["apicallid"] + "", out int apiCallId))
    {
        if (int.TryParse(Request["taskid"] + "", out int taskId))
        {
            if (dicTask.TryGetValue(taskId, out ProgressReportTask task))
            {
                task.ApiCallIndex = apiCallId;

                string json = JsonSerializer.Serialize(task);
                Response.Clear();
                Response.ContentType = "application/json";
                Response.Write(json);
            }
        }
    }
}

int GetNewTaskId()
{
    // Use Guid to prevent collisions
    return Math.Abs(Guid.NewGuid().GetHashCode());
}

Handling Late Echo Responses
The apiCallId system prevents UI corruption from network latency issues:

Normal scenario:
Call 1 → Response 1: 5%
Call 2 → Response 2: 10%
Call 3 → Response 3: 15%

Latency scenario:
Call 1 → (delayed)
Call 2 → Response 2: 10%
Call 3 → Response 3: 15%
        ↓
Response 1: 5% (ignored due to old apiCallId)

Building the Frontend
Basic Value Container

You can use <span> as the container for the values:
<!-- data that is manually handled by the user -->
<span id="labelTaskId"></span>
<span id="labelPercent">0</span>
<span id="labelTimeStart"></span>
<span id="labelTimeEnd"></span>
<span id="labelTimeElapsed"></span>
<span id="labelSqlFilename"></span> <!-- the download web path for the generated sql dump file -->
<span id="labelSha256"></span> <!-- the SHA-256 checksum of the generated file -->

<!-- typical status: running, completed, error, cancelled -->
<span id="labelTaskStatus"></span>

<!-- the following data fields are provided by MySqlBackup.NET during the progress change events: -->

<!-- fields used for backup -->
<span id="labelCurrentTableName"></span>
<span id="labelCurrentTableIndex"></span>
<span id="labelTotalTables"></span>
<span id="labelCurrentRowsAllTables"></span>
<span id="labelTotalRowsAllTables">0</span>
<span id="labelCurrentRowsCurrentTable"></span>
<span id="labelTotalRowsCurrentTable">0</span>

<!-- fields used for restore -->
<span id="labelCurrentBytes"></span>
<span id="labelTotalBytes"></span>

HTML Structure
Example of a comprehensive progress display:
<!-- Progress Bar -->
<div id="progress_bar_container">
    <div id="progress_bar_indicator">
        <span id="labelPercent">0</span> %
    </div>
</div>

<!-- Control Buttons -->
<div class="controls">
    <button type="button" onclick="backup();">Backup Database</button>
    <button type="button" onclick="restore();">Restore Database</button>
    <button type="button" onclick="stopTask();">Stop Current Task</button>

    <label for="fileRestore">Select Restore File:</label>
    <input type="file" id="fileRestore" accept=".sql,.zip" />
</div>

<!-- Detailed Status Display -->
<div class="task-status">
    <table>
        <tr>
            <td>Task ID</td>
            <td><span id="labelTaskId">--</span></td>
        </tr>
        <tr>
            <td>Status</td>
            <td>
                <span id="labelTaskStatus">Ready</span>
                <span id="labelTaskMessage"></span>
            </td>
        </tr>
        <tr>
            <td>Time</td>
            <td>
                Start: <span id="labelTimeStart">--</span> |
                End: <span id="labelTimeEnd">--</span> |
                Duration: <span id="labelTimeElapsed">--</span>
            </td>
        </tr>
        <tr>
            <td>File</td>
            <td>
                <span id="labelSqlFilename">--</span><br>
                SHA256: <span id="labelSha256">--</span>
            </td>
        </tr>

        <!-- Backup-specific fields -->
        <tr class="backup-only">
            <td>Current Table</td>
            <td>
                <span id="labelCurrentTableName">--</span>
                (<span id="labelCurrentTableIndex">--</span> / <span id="labelTotalTables">--</span>)
            </td>
        </tr>
        <tr class="backup-only">
            <td>All Tables Progress</td>
            <td>
                <span id="labelCurrentRowsAllTables">--</span> / <span id="labelTotalRowsAllTables">--</span>
            </td>
        </tr>
        <tr class="backup-only">
            <td>Current Table Progress</td>
            <td>
                <span id="labelCurrentRowsCurrentTable">--</span> / <span id="labelTotalRowsCurrentTable">--</span>
            </td>
        </tr>

        <!-- Restore-specific fields -->
        <tr class="restore-only">
            <td>Progress</td>
            <td>
                <span id="labelCurrentBytes">--</span> / <span id="labelTotalBytes">--</span> bytes
            </td>
        </tr>
    </table>
</div>

JavaScript Implementation
Initialization and Variables

// Global variables
let taskId = 0;
let apiCallId = 0;
let intervalTimer = null;
let intervalMs = 1000;

// Cache DOM elements for better performance
const elements = {
    fileRestore: document.querySelector("#fileRestore"),
    progressBar: document.querySelector("#progress_bar_indicator"),
    labelPercent: document.querySelector("#labelPercent"),
    labelTaskId: document.querySelector("#labelTaskId"),
    labelTimeStart: document.querySelector("#labelTimeStart"),
    labelTimeEnd: document.querySelector("#labelTimeEnd"),
    labelTimeElapsed: document.querySelector("#labelTimeElapsed"),
    labelTaskStatus: document.querySelector("#labelTaskStatus"),
    labelTaskMessage: document.querySelector("#labelTaskMessage"),
    labelSqlFilename: document.querySelector("#labelSqlFilename"),
    labelSha256: document.querySelector("#labelSha256"),

    // Backup-specific elements
    labelCurrentTableName: document.querySelector("#labelCurrentTableName"),
    labelCurrentTableIndex: document.querySelector("#labelCurrentTableIndex"),
    labelTotalTables: document.querySelector("#labelTotalTables"),
    labelCurrentRowsAllTables: document.querySelector("#labelCurrentRowsAllTables"),
    labelTotalRowsAllTables: document.querySelector("#labelTotalRowsAllTables"),
    labelCurrentRowsCurrentTable: document.querySelector("#labelCurrentRowsCurrentTable"),
    labelTotalRowsCurrentTable: document.querySelector("#labelTotalRowsCurrentTable"),

    // Restore-specific elements
    labelCurrentBytes: document.querySelector("#labelCurrentBytes"),
    labelTotalBytes: document.querySelector("#labelTotalBytes")
};


Core Functions
Backup, Restore, Stop
async function backup() {
    resetUIValues();

    const formData = new FormData();
    formData.append('action', 'backup');

    const result = await fetchData(formData);

    if (result.ok) {
        taskId = result.thisTaskid;
        intervalMs = 1000;
        startIntervalTimer();
        showSuccessMessage("Backup Started", "Database backup has begun successfully");
    } else {
        showErrorMessage("Backup Failed", result.errMsg);
        stopIntervalTimer();
    }
}

async function restore() {
    resetUIValues();

    if (!elements.fileRestore.files || elements.fileRestore.files.length === 0) {
        showErrorMessage("No File Selected", "Please select a backup file to restore");
        return;
    }

    const formData = new FormData();
    formData.append('action', 'restore');
    formData.append('fileRestore', elements.fileRestore.files[0]);

    const result = await fetchData(formData);

    if (result.ok) {
        taskId = result.thisTaskid;
        intervalMs = 1000;
        startIntervalTimer();
        showSuccessMessage("Restore Started", "Database restore has begun successfully");
    } else {
        showErrorMessage("Restore Failed", result.errMsg);
        stopIntervalTimer();
    }
}

async function stopTask() {
    if (!taskId || taskId === 0) {
        showErrorMessage("No Active Task", "There is no running task to stop");
        return;
    }

    const formData = new FormData();
    formData.append("action", "stoptask");
    formData.append("taskid", taskId);

    const result = await fetchData(formData);

    if (result.ok) {
        showSuccessMessage("Stop Requested", "The task is being cancelled...");
    } else {
        showErrorMessage("Stop Failed", result.errMsg);
        stopIntervalTimer();
    }
}


Progress Monitoring
async function fetchTaskStatus() {
    apiCallId++;

    const formData = new FormData();
    formData.append('action', 'gettaskstatus');
    formData.append('taskid', taskId);
    formData.append('apicallid', apiCallId);

    const result = await fetchData(formData);

    if (result.ok) {
        // Ignore late responses
        if (result.jsonObject.ApiCallIndex !== apiCallId) {
            return;
        }
        updateUIValues(result.jsonObject);
    } else {
        showErrorMessage("Status Fetch Failed", result.errMsg);
        stopIntervalTimer();
    }
}

function updateUIValues(data) {
    // Optimize update frequency when task starts
    if (data.PercentCompleted > 0 && intervalMs === 1000) {

        // change the timer interval time
        intervalMs = 100;
        stopIntervalTimer();
        setTimeout(startIntervalTimer, 500);
    }

    // Stop monitoring when task completes
    if (data.IsCompleted || data.HasError || data.IsCancelled) {
        stopIntervalTimer();
    }

    // Update basic information
    elements.labelTaskId.textContent = data.TaskId || "--";
    elements.labelTimeStart.textContent = data.TimeStartDisplay || "--";
    elements.labelTimeEnd.textContent = data.TimeEndDisplay || "--";
    elements.labelTimeElapsed.textContent = data.TimeUsedDisplay || "--";

    // Update progress bar
    const percent = data.PercentCompleted || 0;
    elements.labelPercent.style.display = "block";
    elements.labelPercent.textContent = percent;
    elements.progressBar.style.width = percent + '%';

    // Update status with visual indicators
    const statusContainer = elements.labelTaskStatus.closest('td');

    if (data.HasError) {
        elements.labelTaskStatus.textContent = "Error";
        elements.labelTaskMessage.textContent = data.ErrorMsg || "";
        statusContainer.className = "status-error";
        showErrorMessage("Task Failed", data.ErrorMsg || "Unknown error occurred");
    } else if (data.IsCancelled) {
        elements.labelTaskStatus.textContent = "Cancelled";
        elements.labelTaskMessage.textContent = "";
        statusContainer.className = "status-cancelled";
        showWarningMessage("Task Cancelled", "The operation was cancelled by user request");
    } else if (data.IsCompleted) {
        elements.labelTaskStatus.textContent = "Completed";
        elements.labelTaskMessage.textContent = "";
        statusContainer.className = "status-complete";
        showSuccessMessage("Task Completed", "Operation finished successfully!");
    } else {
        elements.labelTaskStatus.textContent = "Running";
        elements.labelTaskMessage.textContent = "";
        statusContainer.className = "status-running";
    }

    // Update file information
    if (data.FileName && data.FileName.length > 0) {
        elements.labelSqlFilename.innerHTML =
            `File: <a href='${data.FileDownloadUrl}' class='download-link' target='_blank'>Download ${data.FileName}</a>`;
    } else {
        elements.labelSqlFilename.textContent = data.FileName || "--";
    }
    elements.labelSha256.textContent = data.SHA256 || "--";

    // Update backup-specific information
    if (data.TaskType === 1) {
        elements.labelCurrentTableName.textContent = data.CurrentTableName || "--";
        elements.labelCurrentTableIndex.textContent = data.CurrentTableIndex || "--";
        elements.labelTotalTables.textContent = data.TotalTables || "--";
        elements.labelCurrentRowsAllTables.textContent = data.CurrentRowIndex || "--";
        elements.labelTotalRowsAllTables.textContent = data.TotalRows || "--";
        elements.labelCurrentRowsCurrentTable.textContent = data.CurrentRowCurrentTable || "--";
        elements.labelTotalRowsCurrentTable.textContent = data.TotalRowsCurrentTable || "--";

        // Hide restore-specific fields
        elements.labelCurrentBytes.textContent = "--";
        elements.labelTotalBytes.textContent = "--";
    }

    // Update restore-specific information
    if (data.TaskType === 2) {
        elements.labelCurrentBytes.textContent = formatBytes(data.CurrentBytes) || "--";
        elements.labelTotalBytes.textContent = formatBytes(data.TotalBytes) || "--";

        // Hide backup-specific fields
        elements.labelCurrentTableName.textContent = "--";
        elements.labelCurrentTableIndex.textContent = "--";
        elements.labelTotalTables.textContent = "--";
        elements.labelCurrentRowsAllTables.textContent = "--";
        elements.labelTotalRowsAllTables.textContent = "--";
        elements.labelCurrentRowsCurrentTable.textContent = "--";
        elements.labelTotalRowsCurrentTable.textContent = "--";
    }
}


Utility Functions
// Centralized API handler
async function fetchData(formData) {
    try {
        let action = formData.get("action") || "";

        const response = await fetch('/apiProgressReport', {
            method: 'POST',
            body: formData,
            credentials: 'include'
        });

        if (response.ok) {
            const responseText = await response.text();

            if (responseText.startsWith("0|")) {
                let err = responseText.substring(2);
                return { ok: false, errMsg: err };
            }
            else {
                if (!responseText || responseText.trim() === '') {
                    let err = "Empty response from server";
                    return { ok: false, errMsg: err };
                }
                else {
                    // Success
                    if (action == "backup" || action == "restore") {
                        let _thisTaskid = parseInt(responseText);
                        if (isNaN(_thisTaskid)) {
                            let err = `Invalid Task ID: ${_thisTaskid}`;
                            return { ok: false, errMsg: err };
                        }
                        else {
                            return { ok: true, thisTaskid: _thisTaskid };
                        }
                    }
                    else if (action == "stoptask") {
                        if (responseText == "1") {
                            return { ok: true };
                        }
                        else {
                            let err = `Unable to stop task`;
                            return { ok: false, errMsg: err };
                        }
                    }
                    else if (action == "gettaskstatus") {
                        try {
                            let thisJsonObject = JSON.parse(responseText);
                            return { ok: true, jsonObject: thisJsonObject };
                        }
                        catch (err) {
                            return { ok: false, errMsg: err };
                        }
                    }
                }
            }
        }
        else {
            const err = await response.text();
            return { ok: false, errMsg: err };
        }
    }
    catch (err) {
        return { ok: false, errMsg: err };
    }
}

function resetUIValues() {
    // Reset all display labels.
    // Skip the file input and the progress bar container
    // (the container holds the labelPercent span as a child).
    Object.entries(elements).forEach(([name, element]) => {
        if (!element || name === "fileRestore" || name === "progressBar") {
            return;
        }
        element.textContent = "--";
    });

    // Reset progress bar
    elements.progressBar.style.width = '0%';
    elements.labelPercent.style.display = "none";
    elements.labelPercent.textContent = "0";

    // Reset status styling
    const statusContainer = elements.labelTaskStatus.closest('td');
    if (statusContainer) {
        statusContainer.className = "";
    }
}

function startIntervalTimer() {
    stopIntervalTimer();
    intervalTimer = setInterval(fetchTaskStatus, intervalMs);
}

function stopIntervalTimer() {
    if (intervalTimer) {
        clearInterval(intervalTimer);
        intervalTimer = null;
    }
}

function formatBytes(bytes) {
    if (!bytes || bytes === 0) return '0 Bytes';

    const sizes = ['Bytes', 'KB', 'MB', 'GB'];
    const i = Math.floor(Math.log(bytes) / Math.log(1024));

    return Math.round(bytes / Math.pow(1024, i) * 100) / 100 + ' ' + sizes[i];
}

// Message display functions (implement based on your UI framework)
function showSuccessMessage(title, message) {
    console.log(`${title}: ${message}`);
    // Implement your success notification here
}

function showErrorMessage(title, message) {
    console.error(`${title}: ${message}`);
    // Implement your error notification here
}

function showWarningMessage(title, message) {
    console.warn(`${title}: ${message}`);
    // Implement your warning notification here
}



European SQL Server 2022 Hosting :: Unlocking Business Potential: The Foundations of Successful Data Resource Administration

clock July 23, 2025 09:56 by author Peter

Data is a strategic asset in today's information-driven economy, not just a byproduct of operations. Organizations may innovate, maximize performance, and make well-informed decisions that support growth when data is handled methodically. However, achieving this promise necessitates a disciplined approach to Data Resource Management (DRM) in addition to technology investment.

Managing Data as a Strategic Resource
Treating data as a valuable resource begins with a mindset shift. Like financial or human capital, data deserves deliberate stewardship. This includes:

  • Inventory and Classification: Cataloging data assets based on sensitivity, usage, and business value.
  • Ownership and Accountability: Defining clear roles for data stewardship across departments.
  • Value Realization: Linking data to outcomes—such as improved customer experience, reduced risk, and operational efficiency.

By embedding these principles in organizational culture, enterprises align data usage with business strategy.

Data Governance: Building Trust Through Structure
Governance ensures that data remains secure, compliant, and aligned with organizational objectives. Key components include:

  • Policy Frameworks: Establishing rules for data usage, sharing, and retention.
  • Compliance Assurance: Meeting regulatory requirements (GDPR, HIPAA, etc.) across jurisdictions.
  • Decision Rights: Clarifying who can make decisions about data quality, access, and modification.

Effective governance is not a one-time initiative—it is an evolving framework that adapts to technological and regulatory change.

Data Quality Assurance and Cleansing

High-quality data is foundational to reliable insights. Quality assurance and cleansing techniques address structural and semantic inconsistencies:

  • Data Profiling: Evaluating datasets to detect anomalies and redundancies.
  • Validation Rules: Implementing checks for accuracy, completeness, and consistency.
  • Cleansing Strategies: Removing duplicate records, correcting formatting errors, and standardizing entries (a minimal SQL sketch follows below).

Proactive assurance mechanisms reduce analytical blind spots and improve downstream data reliability.
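
To make the cleansing idea concrete, here is a minimal T-SQL sketch that removes duplicate records, keeping only the most recent row per email address. The Customers table and its Email and UpdatedAt columns are hypothetical placeholders:

WITH Ranked AS (
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY Email ORDER BY UpdatedAt DESC) AS rn
    FROM Customers -- hypothetical staging table
)
DELETE FROM Ranked
WHERE rn > 1; -- keep only the newest row per email address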

Data Integration Across Ecosystems
Seamless integration of data across platforms is vital for a unified business view. Effective integration practices include:

  • ETL/ELT Processes: Extracting, transforming, and loading data from heterogeneous sources into a central repository.
  • API-Driven Architecture: Facilitating real-time data flow between applications and services.
  • Semantic Harmonization: Aligning data definitions and schemas to ensure consistency across systems.

Integrated data supports enterprise-wide decision-making and eliminates silos.

Data Storage Optimization
Efficient storage strategies ensure scalability and cost-effectiveness. Key considerations involve:

  • Tiered Storage Models: Assigning data to storage media based on access frequency and business value.
  • Compression Techniques: Reducing file size without losing fidelity to optimize space.
  • Retention and Archival Policies: Automating data lifecycle transitions to minimize clutter and ensure compliance.

Storage optimization balances performance with sustainability, especially in data-intensive environments.

Data Visualization Techniques
Visual representation converts complex datasets into intuitive insights. Common techniques include:

  • Dashboards and Reports: Real-time overviews of KPIs tailored for executive and operational users.
  • Interactive Charts: Drill-down capabilities using bar graphs, pie charts, scatter plots, and heatmaps.
  • Geospatial Mapping: Visualizing data across geographic dimensions for logistics and market intelligence.

Effective visualization bridges the gap between raw data and strategic understanding.

Ensuring Integrity and Accessibility
Data integrity and accessibility are twin engines of operational continuity and user empowerment:

  • Data Integrity: Protects against unauthorized changes, corruption, or loss through encryption, backups, and versioning.
  • Accessibility: Ensures users can retrieve necessary data—securely and efficiently—through robust access control and search capabilities.

Well-structured access policies preserve trust while empowering business users.

Strategic Stewardship in the Digital Age
Organizations must make investments in comprehensive DRM procedures that cut across departmental lines if they want to fully utilize data. Data becomes a dynamic force for innovation and resilience when it is supported by robust governance, reliable quality standards, smooth integration, and insightful visualizations.

HostForLIFEASP.NET SQL Server 2022 Hosting



SQL Server Hosting - HostForLIFE :: Moving Information Between Tables in Different Databases

clock July 17, 2025 08:14 by author Peter

One common task that a Database Administrator performs is the movement of data from one database to another. This scenario often arises when organizations need to consolidate data from multiple sources, migrate data to a data warehouse, or perform data analytics on a set of data stored in different databases. A classic scenario is ingesting data from one table in a database to another table in a different database, especially when both tables have identical columns. In this article, we'll explore the steps to accomplish this task effectively in SQL Server Management Studio.

Establish the Database Connection
Before you can transfer data between tables in different databases, you must ensure that you have the necessary permissions and connectivity to access both databases, which often involves configuring connection settings, including server addresses, authentication credentials, and database names. 

Understand the Tables
It is important to ensure that you have a clear understanding of both the source and destination tables. In our case, the tables have the same columns, which simplifies the process. However, you should still verify that the column names, data types, and constraints match between the two tables. Any discrepancies can lead to data integrity issues during the transfer.

Write & Execute the SQL Query
To perform the data transfer, you'll need to write a SQL query using the INSERT INTO ... SELECT statement, which is commonly used for this purpose. The query should specify the source table, the columns to be transferred, and the destination table. Once you've written the SQL query, execute it by clicking the Execute button. In our example, we want to transfer the data in the DimensionCustomer table of the SalesData database into the DimCustomer table in the DataWarehouse database. Both tables have the same structure.

In the screenshot below, we wrote a simple query to fetch all the records from that table.
SELECT * FROM DimensionCustomer;

In the next screenshot below, we switched to the destination database: DataWarehouse, and we selected all the columns in the dimCustomer table without any data ingested thus far. This is necessary to verify the structure of the destination table.

In the next screenshot below, we wrote and executed the query, and from the message displayed, a total of 18,484 records were inserted into the destination table.

INSERT INTO DataWarehouse.dbo.DimCustomer (
    CustomerKey,
    FirstName,
    LastName,
    BirthDate,
    MaritalStatus,
    Gender,
    EmailAddress,
    AnnualIncome,
    NumberofChildren,
    EducationalLevel,
    Address,
    PhoneNumber,
    FirstTransactionDate
)
SELECT
    CustomerKey,
    FirstName,
    LastName,
    BirthDate,
    MaritalStatus,
    Gender,
    EmailAddress,
    AnnualIncome,
    NumberofChildren,
    EducationalLevel,
    Address,
    PhoneNumber,
    FirstTransactionDate
FROM SalesData.dbo.DimensionCustomer;

Verify the Results
After the query execution is complete, it's essential to verify the results. To verify, we executed a simple query mentioned below, and all the records were returned.
SELECT * FROM dimCustomer;
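
Beyond eyeballing the rows, a quick count comparison between source and destination confirms that nothing was lost in transit:

SELECT
    (SELECT COUNT(*) FROM SalesData.dbo.DimensionCustomer) AS SourceRows,
    (SELECT COUNT(*) FROM DataWarehouse.dbo.DimCustomer) AS DestinationRows;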

HostForLIFEASP.NET SQL Server 2022 Hosting

 



European SQL Server 2022 Hosting :: ASP.NET's DataList Control for SQL CRUD Operations

clock July 10, 2025 09:28 by author Peter

Since this article is all about using stored procedures to perform actions on SQL tables, I will walk you through each step of using a DataList control to insert, select, edit, cancel, update, and delete data.

Step 1: SQL Database
To add data to an EmployeeData table, first create the table in your database as shown below.

Table: EmployeeData
    Create Table EmployeeData (  
    EmpID int identity (1, 1) Primary Key,  
    EmpName varchar(30),  
    Contact nchar(15),  
    EmailId nvarchar(50)  
    )  

Stored Procedure:
Also create an insert procedure, a select procedure, an update procedure and a delete procedure as in the following.  
    Select procedure for DataList control.
        create procedure sp_FillData  
        As  
        Begin  
        set nocount on;  
        select EmpID, EmpName, Contact, EmailID from EmployeeData  
        End  
    Insert procedure.
        create procedure sp_InsertEmployeeData
        @EmpName varchar(30),
        @Contact nchar(15),
        @EmailId nvarchar(50)
        As
        Begin
        set nocount on;
        Insert into dbo.EmployeeData (EmpName, Contact, EmailId)
        values (@EmpName, @Contact, @EmailId)
        End
    Delete procedure.
        Create procedure sp_DeleteEmployeeData  
        @EmpID int  
        As  
        Begin  
        set nocount on;  
        Delete from EmployeeData where EmpID=@EmpID  
        End  
    Select procedure.
        create procedure sp_SelectEmployeeData  
        @EmpID int  
        As  
        Begin  
        set nocount on;  
        select EmpID, EmpName, Contact, EmailID from EmployeeData where EmpID=@EmpID  
        End  
    Update procedure.
        Create procedure sp_UpdateEmployeeData
        @EmpID int,
        @EmpName varchar(30),
        @Contact nchar(15),
        @EmailId nvarchar(50)
        As
        Begin
        set nocount on;
        UPDATE Employeedata
        SET EmpName = @EmpName,
            Contact = @Contact,
            EmailId = @EmailId
        WHERE EmpID = @EmpID
        End

Step 2: Maintain database connection
Provide the database connection string inside the project's Web.Config as in the following:
    <connectionStrings>  
       <add name="connstring" connectionString="Data Source=RAKESH-PC;Initial Catalog=SqlServerTech;User ID=sa;Password=password" providerName="System.Data.SqlClient"/>  
    </connectionStrings>  


Step 3: UI design
In Visual Studio create a UI design using the following procedure:
Go to Solution Explorer.
Right-click on the project and click Add tab.
Click Add New Item as in the following:

Figure 1: Add web form 
Now I will write the design code inside DataList.aspx. In this page add some TextBox controls, Button controls and a DataList Control.
Also add two button controls and TextBox controls inside the DataList data row update and delete with DataList event Item commands. The following is the UI design code.
    <%@ Page Title="" Language="C#" MasterPageFile="~/Master/Master.master" AutoEventWireup="true" CodeFile="DataList.aspx.cs" Inherits="UI_DataList" %>  
      
    <asp:Content ID="Content1" ContentPlaceHolderID="head" Runat="Server">  
         
    </asp:Content>  
      
    <asp:Content ID="Content2" ContentPlaceHolderID="ContentPlaceHolder1" Runat="Server">  
        <div>  
            <fieldset style="width: 269px" ><legend><b>DataList Example</b></legend>  
                <div style="width: 250px; background-color: #99FF66;">  
            <asp:Table runat="server">  
                <asp:TableRow>  
                    <asp:TableCell>Name</asp:TableCell><asp:TableCell><asp:TextBox ID="txtName" runat="server"></asp:TextBox ></asp:TableCell>  
                </asp:TableRow>  
      
                <asp:TableRow>  
                    <asp:TableCell>Contact</asp:TableCell><asp:TableCell><asp:TextBox ID="txtContact" runat="server"></asp:TextBox></asp:TableCell>  
                </asp:TableRow>  
      
                <asp:TableRow>  
                    <asp:TableCell>Email id</asp:TableCell><asp:TableCell><asp:TextBox ID="txtEmail" runat="server"></asp:TextBox></asp:TableCell>  
                </asp:TableRow>  
      
                <asp:TableRow>  
                    <asp:TableCell></asp:TableCell><asp:TableCell><asp:Button ID="btnSave" Text="Add Record" runat="server" OnClick="btnSave_Click" /></asp:TableCell>  
                </asp:TableRow>  
            </asp:Table>  
            </div>  
            </fieldset>  
            <br />  
        <fieldset style="width: 535px"><legend><b>Employee Information</b></legend>  
        <div style="background-color: #66FF66">  
        <asp:DataList ID="DataListEmp" runat="server"   
                 DataKeyField="EmpID"   
                 OnDeleteCommand="DataListEmp_DeleteCommand"   
                 OnEditCommand="DataListEmp_EditCommand"  
                 OnUpdateCommand="DataListEmp_UpdateCommand"   
                 OnCancelCommand="DataListEmp_CancelCommand" Width="527px" >  
                <HeaderTemplate>  
                <table><tr style="background-color: #800000; color: #FFFFFF">  
                <th>Name</th><th>Contact</th><th>Email ID</th><th>Action</th></tr>  
                </HeaderTemplate>  
                <ItemTemplate>  
                <tr >  
                <td><%# DataBinder.Eval(Container.DataItem, "EmpName")%></td>  
                <td><%# DataBinder.Eval(Container.DataItem,"Contact")%></td>  
                <td><%# DataBinder.Eval(Container.DataItem, "EmailId")%></td>  
                <td><asp:Button ID="imgbtnedit" runat="server" Text="Edit"  ToolTip="Edit" CommandName="Edit"/></td>  
                <td><asp:Button ID="btndelete" runat="server" Text="Delete" CommandName="Delete" ToolTip="Delete"/></td>  
                </tr>            
                </ItemTemplate>  
                <EditItemTemplate>             
                <tr>  
                 <td><asp:TextBox BackColor="Yellow" Font-Bold="true"  ID="txtName" runat="server" Text='<%# Eval("EmpName") %>'></asp:TextBox></td>  
                 <td><asp:TextBox BackColor="Yellow" Font-Bold="true" ID="txtContact" runat="server" Text='<%# Eval("Contact") %>'></asp:TextBox></td>  
                 <td><asp:TextBox BackColor="Yellow" Font-Bold="true" ID="txtEmail" runat="server" Text='<%# Eval("EmailId") %>'></asp:TextBox></td>  
                 <td><asp:Button ID="btnupdate" runat="server"  ToolTip="Update" Text="Update" CommandName="Update" /></td>  
                 <td><asp:Button ID="btncancel" runat="server"  ToolTip="Cancel" Text="Cancel" CommandName="Cancel" /></td>  
                </tr>  
                </EditItemTemplate>  
            </asp:DataList>  
            </div>  
            </fieldset>  
            </div>  
    </asp:Content>  


Step 4: UI code
This section covers the operations on the DataList row records.
First, write the code that inserts the TextBox values into the SQL table using the preceding insert procedure, then write the code that displays the inserted records in the DataList control. Also write the code for the two buttons: one for delete and the second for update.

Now create an update command, an edit command, a cancel command, and an Item command as shown in Figure 3, and also set the DataKeyField in the Data section of the property window as in Figure 2.

Figure 2: Add Data Key Field

Figure 3: Add Action Command
Code
    using System;  
    using System.Collections.Generic;  
    using System.Linq;  
    using System.Web;  
    using System.Web.UI;  
    using System.Web.UI.WebControls;  
    using System.Configuration;  
    using System.Data;  
    using System.Data.SqlClient;  
      
    public partial class UI_DataList : System.Web.UI.Page  
    {  
        string connection = ConfigurationManager.ConnectionStrings["connstring"].ConnectionString;  
        protected void Page_Load(object sender, EventArgs e)  
        {  
            if (!IsPostBack)  
            {  
                GetEmpDataList();  
            }  
        }  
        protected void btnSave_Click(object sender, EventArgs e)  
        {  
            using (SqlConnection con = new SqlConnection(connection))  
            {  
                using (SqlCommand cmd = new SqlCommand("sp_InsertEmployeeData", con))  
                {  
                    cmd.CommandType = CommandType.StoredProcedure;  
      
                    cmd.Parameters.AddWithValue("@EmpName", SqlDbType.VarChar).Value = txtName.Text.Trim();  
                    cmd.Parameters.AddWithValue("@Contact", SqlDbType.NChar).Value = txtContact.Text.Trim();  
                    cmd.Parameters.AddWithValue("@EmailId", SqlDbType.NVarChar).Value = txtEmail.Text.Trim();  
      
                    con.Open();  
                    cmd.ExecuteNonQuery();  
                    con.Close();  
      
                    Clear();  
                    Response.Write("<script type=\"text/javascript\">alert('Record Inserted Successfully');</script>");  
                    GetEmpDataList();  
                }  
            }  
        }  
        void Clear()  
        {  
            txtName.Text = String.Empty;  
            txtContact.Text = String.Empty;  
            txtEmail.Text = String.Empty;  
        }  
        private void GetEmpDataList()  
        {  
            using (SqlConnection con = new SqlConnection(connection))  
            {  
                SqlDataAdapter sd = new SqlDataAdapter("sp_FillData", con);  
                sd.SelectCommand.CommandType = CommandType.StoredProcedure;  
                DataTable dt = new DataTable();  
      
                sd.Fill(dt);  
      
                if (dt.Rows.Count > 0)  
                {  
                    DataListEmp.DataSource = dt;  
                    DataListEmp.DataBind();  
                }  
             }  
        }  
        protected void DataListEmp_DeleteCommand(object source, DataListCommandEventArgs e)  
        {  
           int EmpID = Convert.ToInt32(DataListEmp.DataKeys[e.Item.ItemIndex].ToString());  
      
           using (SqlConnection con = new SqlConnection(connection))  
           {  
               using (SqlCommand cmd = new SqlCommand("sp_DeleteEmployeeData", con))  
               {  
                   cmd.CommandType = CommandType.StoredProcedure;  
                   cmd.Parameters.AddWithValue("@EmpID",EmpID);  
      
                   con.Open();  
                   cmd.ExecuteNonQuery();  
                   con.Close();  
      
                   Response.Write("<script type=\"text/javascript\">alert('Record Deleted Successfully');</script>");  
                   GetEmpDataList();  
               }  
           }  
        }  
        protected void DataListEmp_EditCommand(object source, DataListCommandEventArgs e)  
        {  
            DataListEmp.EditItemIndex = e.Item.ItemIndex;  
            GetEmpDataList();  
        }  
        protected void DataListEmp_CancelCommand(object source, DataListCommandEventArgs e)  
        {  
            DataListEmp.EditItemIndex = -1;  
            GetEmpDataList();  
        }  
        protected void DataListEmp_UpdateCommand(object source, DataListCommandEventArgs e)  
        {  
            int EmpID = Convert.ToInt32(DataListEmp.DataKeys[e.Item.ItemIndex].ToString());  
      
            TextBox txtName = (TextBox)e.Item.FindControl("txtName");  
            TextBox txtContact = (TextBox)e.Item.FindControl("txtContact");  
            TextBox txtEmail = (TextBox)e.Item.FindControl("txtEmail");  
      
            using (SqlConnection con = new SqlConnection(connection))  
            {  
                using (SqlCommand cmd = new SqlCommand("sp_UpdateEmployeeData", con))  
                {  
                    cmd.CommandType = CommandType.StoredProcedure;  
      
                    cmd.Parameters.AddWithValue("@EmpID", EmpID);  
                    cmd.Parameters.AddWithValue("@EmpName", SqlDbType.VarChar).Value = txtName.Text.Trim();  
                    cmd.Parameters.AddWithValue("@Contact", SqlDbType.NChar).Value = txtContact.Text.Trim();  
                    cmd.Parameters.AddWithValue("@EmailId", SqlDbType.NVarChar).Value = txtEmail.Text.Trim();  
      
                    con.Open();  
                    cmd.ExecuteNonQuery();  
                    con.Close();  
      
                    Clear();  
                    Response.Write("<script type=\"text/javascript\">alert('Record Update Successfully');</script>");  
                    DataListEmp.EditItemIndex = -1;  
                    GetEmpDataList();  
                }  
            }  
        }  
    }  

Step 5: Browser Side
Now run your new page in the browser. 
I hope you now understand how to work with the DataList control's action command events and SQL stored procedures.
Have a nice day. 

HostForLIFEASP.NET SQL Server 2022 Hosting




European SQL Server 2022 Hosting :: SQL Server Cumulative Transaction for Credit Debit Transaction

clock July 2, 2025 07:39 by author Peter

This article walks through building a cumulative (running balance) report on credit and debit transactions in banking.

Step 1. Create a table-valued function that calculates the credit and debit amounts together with the running total balance.

USE [SqlBank]

CREATE   FUNCTION [dbo].[FNGetTransaction](@CID BIGINT)
RETURNS @Tab_TRansaction TABLE (id BIGINT ,CreditAmt MONEY,DebitAmt MONEY,Tr_Type VARCHAR(250),
TranDate DATETIME ,AC_ID BIGINT ,Balance DECIMAL(18,2),TType VARCHAR(200)
, CustName varchar(max),AC_NO bigint,Address varchar(max),Mobile varchar(max),
Email varchar(max),AC_OpDate datetime,Remarks varchar(max) , IsmailSend int)
AS
BEGIN

DECLARE @TempAC_ID BIGINT;

SET @TempAC_ID = (SELECT TOP 1 A.AC_ID FROM Tbl_Account A join Tbl_Cust
  C ON A.CID=C.CID WHERE c.CID=@CID)

DECLARE @Tbl_Tran Table
(id BIGINT,
CreditAmt MONEY,DebitAmt MONEY,Tr_Type VARCHAR(250),
TranDate DATETIME ,AC_ID BIGINT ,Balance DECIMAL(18,2),TType VARCHAR(200),
 CustName varchar(max),AC_NO bigint,Address varchar(max),Mobile varchar(max),
Email varchar(max),AC_OpDate datetime  ,
Remarks varchar(max)  , IsmailSend int
)


INSERT INTO @Tbl_Tran(id,CreditAmt,DebitAmt,Tr_Type,TranDate,AC_ID,Balance,TType,CustName ,AC_NO ,Address
 ,Mobile,Email ,AC_OpDate,Remarks,IsmailSend)
SELECT TR.TR_ID, CASE WHEN tr.TR_CrDrType ='Cr' THEN tr.TR_Amt ELSE 0 END CreditAmt,
CASE WHEN tr.TR_CrDrType ='Dr' THEN tr.TR_Amt ELSE 0 END DebitAmt ,Tr.TR_Type,tr.TR_Date,Tr.AC_ID ,

 CASE WHEN tr.TR_CrDrType ='Cr' THEN tr.TR_Amt ELSE 0 END - CASE WHEN tr.TR_CrDrType ='Dr'
 THEN tr.TR_Amt ELSE 0 END  Balance,
 Tr.TR_CrDrType  ,C.CName ,Acc.AC_NO ,C.CAddress ,C.CMObile,C.CEmail ,Acc.AC_OpDate ,
 Tr.Remarks , Tr.IsmailSend

FROM Tbl_Transaction Tr with(nolock) join Tbl_Account Acc with(nolock) ON acc.AC_ID=Tr.AC_ID
      join Tbl_Cust C with(nolock) ON C.CID=Acc.CID
WHERE Acc.CID=@CID;

WITH Tbl_CTE_Tran
as
(
SELECT T2.id,T2.CreditAmt,T2.DebitAmt,SUM(T1.CreditAmt-T1.DebitAmt) Balance,
T2.Tr_Type,T2.TranDate,T2.AC_ID
,T2.TType,T2.CustName ,T2.AC_NO ,T2.Address
 ,T2.Mobile,T2.Email ,T2.AC_OpDate,t2.Remarks,t2.IsmailSend FROM @Tbl_Tran T1
join @Tbl_Tran T2 on T1.id<=T2.id WHERE T2.AC_ID=@TempAC_ID
GROUP BY T2.id,T2.CreditAmt,T2.DebitAmt,T2.Tr_Type,T2.TranDate,T2.AC_ID,T2.TType,
T2.CustName ,T2.AC_NO ,T2.Address
 ,T2.Mobile,T2.Email ,T2.AC_OpDate  ,t2.Remarks ,t2.IsmailSend
)

INSERT INTO @Tab_TRansaction (id,CreditAmt,DebitAmt,Tr_Type,TranDate,AC_ID,Balance,TType,CustName ,AC_NO ,Address
 ,Mobile,Email ,AC_OpDate ,Remarks ,IsmailSend
 )
SELECT id,CreditAmt,DebitAmt,Tr_Type,TranDate,AC_ID,Balance,TType  ,CustName ,AC_NO ,Address
 ,Mobile,Email ,AC_OpDate ,Remarks,IsmailSend
FROM Tbl_CTE_Tran  with(nolock)
WHERE AC_ID=@TempAC_ID

RETURN
END
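
For comparison, the triangular self-join in the CTE above recomputes the running sum for every row, which scales poorly as the transaction history grows. On SQL Server 2012 and later, the same running balance can be expressed with a window aggregate over the same @Tbl_Tran table variable — a minimal sketch, not a drop-in replacement for the full function:

SELECT id, CreditAmt, DebitAmt, Tr_Type, TranDate, AC_ID,
       SUM(CreditAmt - DebitAmt) OVER (ORDER BY id
           ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS Balance
FROM @Tbl_Tran
WHERE AC_ID = @TempAC_ID;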


Step 2. Create a procedure and call the above function in the procedure.

USE [SqlBank]

CREATE PROC [dbo].[PROC_TRansaction]
(
@TR_ID int=null output,
@CID bigint=null,
@TR_Amt decimal(18,2)=null,
@AC_ID bigint =null,
@Flag varchar(100)=null,
@AC_No bigint=null,
@Remarks varchar(max)=null,
@MTR_ID int=null output,
@Balance decimal(18,2)=null output
)
AS
BEGIN

DECLARE @TempTRAmount decimal(18,2)
DECLARE @Temp_ACID bigint

DECLARE @Tran_ScopID bigint;
DECLARE @Tran_ID bigint;
DECLARE @MMTR_ID bigint;

BEGIN TRAN Tbl_Transaction_Tran
   BEGIN  TRY


      IF(@Flag = 'Tran')
      BEGIN

IF EXISTS(SELECT 1 FROM Tbl_Transaction Tr with(nolock) join Tbl_Account Acc
with(nolock) ON acc.AC_ID=Tr.AC_ID WHERE Acc.CID=@CID)
   BEGIN


 SELECT  a.id id ,a.DebitAmt,a.CreditAmt,a.Balance
 ,a.Tr_Type, isnull(Format(a.TranDate,'dd-MMM-yyyy HH:mm'),'') TranDate, NCHAR(8377) Rupees ,a.TType,a.Remarks
   FROM dbo.FNGetTransaction(@CID) a
--      JOIN dbo.FNGetTransaction(@CID) b ON b.id<=a.id

--GROUP BY a.id,a.DebitAmt,a.CreditAmt,a.Tr_Type,a.TranDate,a.AC_ID,a.TType


END
ELSE
BEGIN
Select 'No Transaction summary found...?' OpMsg
END
    END

ELSE IF(@Flag = 'IN')
    BEGIN

    SET @Temp_ACID = (SELECT Top 1 A.AC_ID  FROM Tbl_Account A with(nolock)
     Join Tbl_Cust C with(nolock) ON A.CID=C.CID WHERE A.AC_No=@AC_No)

    DECLARE @SenderName varchar(max)
        SET @SenderName = (SELECT Top 1 c.CName  FROM Tbl_Account A with(nolock)
        Join Tbl_Cust C with(nolock) ON A.CID=C.CID WHERE c.CID=@CID)

    DECLARE @ReciverName varchar(max)
       SET @ReciverName = (SELECT Top 1 c.CName FROM Tbl_Account A with(nolock)
        Join Tbl_Cust C with(nolock) ON A.CID=C.CID
       WHERE A.AC_No=@AC_No)


SET @TempTRAmount = (
 SELECT TOP 1 ISNULL(SUM(b.balance),0) Balance
   FROM dbo.FNGetTransaction(@CID) a
JOIN dbo.FNGetTransaction(@CID) b ON b.id<=a.id

GROUP BY a.id,a.DebitAmt,a.CreditAmt,a.Tr_Type,a.TranDate,a.AC_ID,a.TType ORDER BY a.id desc)

if(@TR_Amt > @TempTRAmount)
BEGIN

Select 'Insufficient Balance' as msg

END
ELSE
  BEGIN
  Declare @FixScratchAmt decimal(18,2)=500;

  --if not exists (select 1 from Tbl_Transaction Where TR_Date=CURRENT_TIMESTAMP and Ref_TranACC=@AC_ID)
  --begin
  Insert INTO Tbl_Transaction (TR_Type,TR_Amt,TR_Date,AC_ID,TR_CrDrType,Ref_TranACC,isdelete,IsTranType,IsMailSend,Remarks)
  Values                ('Online - Transfer To - '+ @ReciverName + ' '+Cast(@Ac_NO as varchar(max))+' ',
  ISNULL(@TR_Amt,0),CURRENT_TIMESTAMP,@AC_ID,'Dr','Tran-' +CAST(@AC_ID as varchar(max)),0,'S',0,@Remarks)

  set @Tran_ID = @@IDENTITY;
  set @TR_ID= @Tran_ID;
  set @Tran_ScopID= SCOPE_IDENTITY();
  Set @Balance = (SELECT TOP 1 BALANCE FROM dbo.FNGetTransaction(@CID) order by id desc)


  if(@TR_Amt >= @FixScratchAmt)
  begin
   Insert INTO Tbl_Transaction (TR_Type,TR_Amt,TR_Date,AC_ID,TR_CrDrType,Ref_TranACC,isdelete,IsTranType,IsMailSend,Remarks)
  Values                ('Cash Back From S Bank7 ',10,CURRENT_TIMESTAMP,@AC_ID,'Cr',0,1,'R',0,'Cash back from Sbank7. Pay & win more cash back ')
  END

Insert INTO Tbl_Transaction (TR_Type,TR_Amt,TR_Date,AC_ID,TR_CrDrType,Ref_TranACC,isdelete,IsTranType,IsMailSend,Remarks)
  Values                ('Received From ' + @SenderName + ' Tran - '+Cast(@Tran_ScopID as varchar(max))+'-'+
  CAST(@AC_ID as varchar(max)),ISNULL(@TR_Amt,0),CURRENT_TIMESTAMP,@Temp_ACID,'Cr','Tran-'
  +Cast(@Tran_ScopID as varchar(max))+'-'+ CAST(@AC_ID as varchar(max)),0,'R',0,@Remarks)
  set @MMTR_ID = @@IDENTITY;
  set @MTR_ID = @MMTR_ID;


 -- Select TOP 1 c.CEmail ReciverEmail,c.CName From Tbl_Cust c join Tbl_Account a on a.CID=c.CID where a.AC_No=@AC_No

 -- SELECT top 1 A.AC_No,t.TR_Amt,t.TR_CrDrType,@SenderName SenderName,'http://saipathrikar.blogspot.com/' Url ,

 --   (
 --  SELECT top 1 fna.Balance

 --  FROM dbo.FNGetTransaction(@CID) fna

    --order by fna.id desc
 -- ) Bal , @ReciverName ReciverName


  -- FROM Tbl_Transaction T join Tbl_Account A on
  -- T.AC_ID= A.AC_ID Where A.CID=@CID
  --order by t.TR_ID desc

 -- SELECT top 1 A.AC_No,t.TR_Amt,t.TR_CrDrType,t.Ref_TranACC +'  ' +
 --  @SenderName SenderName,'http://saipathrikar.blogspot.com/' Url  ,

 --  (
 --  SELECT top 1 fna.Balance

 --  FROM Tbl_Account a cross apply dbo.FNGetTransaction(a.cid) fna
 --   where a.ac_ID=fna.AC_ID and a.AC_No=@AC_No
    --order by fna.id desc
 -- ) Bal


 --  FROM Tbl_Transaction T join Tbl_Account A on
 --  T.AC_ID= A.AC_ID Where A.AC_No=@AC_No
 -- order by t.TR_ID desc

 -- end

 -- This Sectioin is for pdf data

 --SELECT  a.id Tran_id ,a.Tr_Type Type,a.TranDate Date,a.CreditAmt,a.DebitAmt,a.TType CrDr,a.Balance

 --  FROM dbo.FNGetTransaction(@CID) a


 --  SELECT  a.id Tran_id ,a.Tr_Type Type,a.TranDate Date,a.CreditAmt,a.DebitAmt,a.TType CrDr,a.Balance
 --  FROM Tbl_Account ac cross apply dbo.FNGetTransaction(ac.cid) a
 --   where ac.ac_ID=a.AC_ID and ac.AC_ID=@Temp_ACID


    END

    END

IF(@@TRANCOUNT > 0)
  BEGIN
  COMMIT TRAN Tbl_Transaction_Tran
  END

END TRY
BEGIN CATCH

IF(@@TRANCOUNT > 0)
            BEGIN
            ROLLBACK TRAN Tbl_Transaction_Tran
            END

            DECLARE @USERID varchar(max),@ERRORLINE varchar(max)
            ,@ERRORMESSAGE varchar(max),@ERRORPROCEDURE varchar(500),@ERRORSEVERITY varchar(max)
            ,@ERRORSTATE varchar(max), @ErroFrm varchar(max)

            SELECT @USERID = SUSER_SNAME(),@ERRORLINE=ERROR_LINE(),@ERRORMESSAGE=ERROR_MESSAGE(),
                   @ERRORPROCEDURE=ERROR_PROCEDURE(),@ERRORSEVERITY=ERROR_SEVERITY(),
                  @ERRORSTATE= ERROR_STATE() ,@ErroFrm = 'Backend'

        EXEC Proc_ERRORLOG @USERID,@ERRORLINE,@ERRORMESSAGE,@ERRORPROCEDURE,@ERRORSEVERITY,@ERRORSTATE,0,@ErroFrm

END CATCH

END

Step 3. Execute the procedure to check the report.

exec [dbo].[PROC_TRansaction] @CID = 2, @Flag = 'Tran'


Step 4. Output

Thank you for reading this article.

HostForLIFEASP.NET SQL Server 2022 Hosting

 



European SQL Server 2022 Hosting :: Comparing SQL Server and Oracle Query Types by Category

clock June 24, 2025 08:08 by author Peter

This comprehensive reference manual compares and classifies the most used SQL queries in Oracle and SQL Server, arranged by:

  • DDL: Data Definition Language
  • DML: Data Manipulation Language
  • TCL: Transaction Control Language
  • DCL: Data Control Language
  • DQL: Data Query Language

1. DDL – Data Definition Language
Used for defining or modifying database objects like tables and columns.
Operation | SQL Server Syntax | Oracle Syntax
Create Table | CREATE TABLE table_name (col1 INT, col2 VARCHAR(100)); | Same
Drop Table | DROP TABLE table_name; | Same
Add Column | ALTER TABLE table_name ADD col_name datatype; | Same
Drop Column | ALTER TABLE table_name DROP COLUMN col_name; | Same
Modify Column | ALTER TABLE table_name ALTER COLUMN col_name datatype; | ALTER TABLE table_name MODIFY (col_name datatype);
Rename Column | EXEC sp_RENAME 'table.old', 'new', 'COLUMN'; | ALTER TABLE table_name RENAME COLUMN old TO new;
Rename Table | EXEC sp_RENAME 'old_table', 'new_table'; | RENAME old_table TO new_table;

2. DML – Data Manipulation Language
Used for managing the data within tables.

Operation | SQL Server Syntax | Oracle Syntax
Insert | INSERT INTO table_name VALUES (...); | Same
Update | UPDATE table_name SET col = val WHERE ...; | Same
Delete | DELETE FROM table_name WHERE ...; | Same
Merge (Upsert) | MERGE INTO target USING source ON (condition) | Same
Truncate | TRUNCATE TABLE table_name; | Same

3. TCL – Transaction Control Language
Used to manage changes made by DML operations.

Operation | SQL Server Syntax | Oracle Syntax
Begin Transaction | BEGIN TRAN; | BEGIN; or automatic (depending on settings)
Commit | COMMIT; | COMMIT;
Rollback | ROLLBACK; | ROLLBACK;
Savepoint | SAVE TRAN save_name; | SAVEPOINT sp_name;
Rollback to Savepoint | ROLLBACK TRAN save_name; | ROLLBACK TO sp_name;

4. DCL – Data Control Language
Used to control access and privileges on database objects.

Operation | SQL Server Syntax | Oracle Syntax
Grant Privileges | GRANT SELECT ON table TO user; | Same
Revoke Privileges | REVOKE SELECT ON table FROM user; | Same
Create User | CREATE LOGIN user WITH PASSWORD = 'pwd'; CREATE USER user FOR LOGIN user; | CREATE USER user IDENTIFIED BY pwd;
Assign Role | ALTER ROLE role_name ADD MEMBER user; | GRANT role_name TO user;

5. DQL – Data Query Language
Used to query data from tables.

Operation | SQL Server Syntax | Oracle Syntax
Select All | SELECT * FROM table_name; | Same
Select with WHERE | SELECT * FROM table_name WHERE condition; | Same
Order By | SELECT * FROM table ORDER BY col DESC; | Same
LIKE Clause | SELECT * FROM table WHERE col LIKE 'A%'; | Same
Top Rows | SELECT TOP 5 * FROM table; | SELECT * FROM table WHERE ROWNUM <= 5;
Aggregate Functions | SELECT COUNT(*), AVG(col) FROM table; | Same
Group By | SELECT col, COUNT(*) FROM table GROUP BY col; | Same
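
One note on the Top Rows difference: both platforms now also support the ANSI row-limiting clause, which narrows the gap on modern versions:

-- Oracle 12c and later
SELECT * FROM table_name ORDER BY col FETCH FIRST 5 ROWS ONLY;

-- SQL Server 2012 and later (OFFSET is required before FETCH)
SELECT * FROM table_name ORDER BY col OFFSET 0 ROWS FETCH NEXT 5 ROWS ONLY;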

Bonus Query Comparisons

Purpose | SQL Server Example | Oracle Example
Backup Table | SELECT * INTO backup FROM original; | CREATE TABLE backup AS SELECT * FROM original;
Insert from SELECT | INSERT INTO t2 SELECT * FROM t1; | Same
Check Procedure Exists | SELECT * FROM sys.objects WHERE name='proc'; | SELECT * FROM user_objects WHERE object_name='PROC';
Hex to Text Conversion | SELECT CONVERT(VARCHAR(50), 0x48656C6C6F); | UTL_RAW.CAST_TO_VARCHAR2('48656C6C6F');
Execute Procedure (XML) | EXEC ProcName 'XML'; | BEGIN ProcName('XML'); END;

Summary Table – SQL Query Categories

Category | Full Form | Common Operations
DDL | Data Definition Language | CREATE, ALTER, DROP
DML | Data Manipulation Language | INSERT, UPDATE, DELETE, MERGE
TCL | Transaction Control Language | COMMIT, ROLLBACK, SAVEPOINT
DCL | Data Control Language | GRANT, REVOKE, CREATE USER, ROLES
DQL | Data Query Language | SELECT, WHERE, ORDER BY, GROUP BY

HostForLIFEASP.NET SQL Server 2022 Hosting



European SQL Server 2022 Hosting :: How to Install and Configure SQL Server 2025 in Windows?

clock June 13, 2025 10:36 by author Peter

Microsoft created SQL Server 2025, a potent relational database management system (RDBMS) ideal for database applications at the enterprise level. If you properly follow the instructions, installing SQL Server 2025 on a Windows computer is simple.

System Requirements
Make sure your system meets the following requirements before proceeding:

  • Operating System: Windows 10, Windows Server 2019, or higher
  • Processor: Minimum 2 GHz; recommended multi-core processor
  • RAM: Minimum 4 GB; recommended 8 GB or higher
  • Disk Space: At least 10 GB of free space
  • .NET Framework: Version 4.8 or higher

Step-by-Step Installation
Step 1: Download SQL Server 2025

Visit the official Microsoft website and navigate to the SQL Server 2025 download page. Choose the appropriate version (Standard, Enterprise, or Developer) based on your needs, and download the installation package. Or click here to download SQL Server 2025, then fill out the form below to download the SQL Server 2025 public preview version.

Then the SQL Server Media file will download as shown in the snapshot below.

Step 2: Launch the Installer
Once the download is complete:

  • Locate the installer file in the download folder (file name = SQL2025-SSEI-Eval.exe).
  • Right-click on the file and select Run as Administrator to ensure proper installation. 

Step 3: Choose Installation Type
The installation wizard will prompt you to select the type of installation:

  • Basic Installation: Recommended for beginners or small-scale projects.
  • Custom Installation: Allows you to choose specific features and settings.

Here I am selecting Basic for now, as shown in the snapshot below.

For most users, selecting Custom Installation provides the flexibility needed to tailor the installation to their environment. Then select Agree to accept the Microsoft SQL Server License Terms.

Then choose the installation location. For now, I am keeping the default installation path, which is C:\Program Files\Microsoft SQL Server, and click on the Install button. Then, the download and install package will begin as shown in the snapshot below. It may take a few minutes, depending on your internet speed. Once the download finishes, installation will start automatically. Just wait for a few minutes.

Step 4: Configure Instance
During installation:

  • Choose between a Default Instance or a Named Instance.
  • A Default Instance is typically named MSSQLSERVER and works for general purposes.
  • Named Instances are useful for running multiple SQL Server versions on the same machine.

If SSMS is already installed, click Connect Now, or click the Close button and then connect from SSMS. If SSMS is not installed yet, click Install SSMS, or follow my other article to learn what SQL Server Management Studio (SSMS) is and how to install it on Windows. If you are installing SQL Server for the first time on your machine, follow the steps below.

Step 5: Set Up Server Configuration
The wizard will ask for server configuration details:

  • Specify the Authentication Mode:
  • Windows Authentication: Recommended for integration with Windows accounts.
  • Mixed Mode: Allows both Windows and SQL Server authentication.

Provide a strong password for the system administrator (SA) account if using Mixed Mode.

Step 6: Select Features
Choose the features you wish to install:

  • Database Engine Services: For managing databases.
  • Analysis Services: For data analytics.
  • Reporting Services: For generating reports.
  • Integration Services: For ETL processes.

Make sure to only select the features that are relevant to your project to save system resources, and then click on the next button as shown in the snapshot below.

Step 7: Installation Progress
Once all configurations are set, the installer will begin installing SQL Server 2025. This process may take several minutes. Monitor the progress bar and ensure the installation completes without errors.

Step 8: Verify Installation
After installation:

Open SQL Server Management Studio (SSMS) or a similar tool.
Connect to the newly installed SQL Server instance using your credentials.
Run a simple query to test database functionality.
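
For example, the following one-line query confirms that the instance responds and shows the installed build:

SELECT @@VERSION AS SqlServerVersion;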

Conclusion
Following these steps will help you install SQL Server 2025 successfully on a Windows system. Make sure to keep software and drivers up to date for optimal performance and security. If you require any clarification/suggestions on the article, please leave your questions and thoughts in the comment section below. Follow C# Corner to learn more new and amazing things about SQL or to explore more technologies. Thanks for reading, and I hope you like it.

HostForLIFEASP.NET SQL Server 2022 Hosting



European SQL Server 2022 Hosting :: In SQL, Horizontal vs Vertical Partitioning

clock June 5, 2025 09:49 by author Peter

It gets harder to maintain performance and scalability as databases get bigger and more complex. In order to optimize query performance and facilitate effective data organization, partitioning is a commonly used technique that divides data into digestible chunks. Developers and DBAs sometimes struggle to decide between vertical and horizontal partitioning when creating partitioning techniques. Depending on the use situation, each strategy has specific benefits and drawbacks. This post will discuss the distinctions between vertical and horizontal partitioning, look at actual use cases, and emphasize how each technique affects performance. You will have the information necessary to select the best partitioning strategy for your SQL database by the end of this session.

What is Partitioning in SQL?
Partitioning is the act of dividing a large database table into smaller, more manageable pieces called "partitions." These partitions allow the database engine to perform operations on a subset of data rather than the entire table, leading to improved performance and scalability.

Partitioning is often used in,

  • OLTP (Online Transaction Processing) systems to handle high transactional workloads.
  • OLAP (Online Analytical Processing) systems to optimize complex aggregations and queries on large datasets.

Partitioning can be implemented as either horizontal or vertical, each catering to specific requirements.

Horizontal Partitioning
Definition
Horizontal partitioning refers to splitting rows of a table across multiple partitions. Each partition contains a subset of records based on a defined range, list, or hash function. All columns from the original table remain in each partition, but the rows are distributed across partitions.

How does it work?

The key criterion for horizontal partitioning is a partitioning key—a column used to determine which partition a row should belong to. For example,

  • A range partition might store rows based on values such as year.
  • A list partition could separate rows based on region or department.

Example
Let’s consider a table SalesData with 10 million rows, containing sales records grouped by years. If we partition the table horizontally based on a RANGE of years, the schema could look like.
    Partition 1: Records for the year 2020
    Partition 2: Records for the year 2021
    Partition 3: Records for the year 2022


-- Create Partition Function
CREATE PARTITION FUNCTION SalesPartitionFunction(INT)
AS RANGE LEFT FOR VALUES (2020, 2021, 2022);

-- Create Partition Scheme
CREATE PARTITION SCHEME SalesPartitionScheme
AS PARTITION SalesPartitionFunction
ALL TO ([PRIMARY]);

-- Create a table using the above partition scheme
CREATE TABLE SalesData (
    SaleID INT,
    Year INT,
    Amount DECIMAL(10, 2),
    PRIMARY KEY (Year, SaleID) -- Include partitioning column in the PRIMARY KEY
) ON SalesPartitionScheme (Year);

Use Cases for Horizontal Partitioning

  • Time-Based Data: Partitioning logs, transactions, or sales data grouped by date or year allows efficient range queries. Example query: retrieve sales data only for 2022.
  • Geographically Distributed Data: Partition by region or country for organizations with distributed offices. Example query: retrieve records only for the Europe region.
  • Scalability in Distributed Systems: Horizontal partitioning can distribute partitions across multiple servers (a process often referred to as sharding) to facilitate scaling.

Performance Impact

  • Query Optimization: Query performance improves since SQL Server can scan only relevant partitions instead of the full table. For example, SELECT * FROM SalesData WHERE Year = 2022; will scan just the 2022 partition.
  • Reduced I/O: By accessing smaller subsets of rows, the number of disk reads and memory usage is significantly reduced.
  • Parallel Processing: Partitioned data enables better parallelism, where queries can operate on multiple partitions simultaneously.
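
To verify how rows are distributed across the partitions created above, you can query SQL Server's built-in $PARTITION function against the partition function — a quick sketch:

SELECT $PARTITION.SalesPartitionFunction(Year) AS PartitionNumber,
       COUNT(*) AS RowsInPartition
FROM SalesData
GROUP BY $PARTITION.SalesPartitionFunction(Year);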

Vertical Partitioning
Definition

Vertical partitioning splits a table’s columns into multiple tables or entities. Each partition contains a subset of the columns from the original table, often leaving shared columns (like primary keys) in both partitions for reference.

How does it work?

Vertical partitioning is useful for scenarios where certain sets of columns in a table are accessed frequently, while others are rarely used. In this strategy, the table is divided into smaller tables to isolate rarely accessed columns.

Example
Consider a customer management table, CustomerData, containing demographic information (e.g., name, address) as well as financial records (e.g., account balance, credit score). If only the demographic data is frequently queried, we can split the table vertically like this.

Original Table
CREATE TABLE CustomerData (
    CustomerID INT PRIMARY KEY,
    Name VARCHAR(100),
    Address VARCHAR(200),
    CreditScore INT,
    Balance DECIMAL(10, 2)
);


After Vertical Partitioning
CREATE TABLE CustomerDemographics (
    CustomerID INT PRIMARY KEY,
    Name VARCHAR(100),
    Address VARCHAR(200)
);

CREATE TABLE CustomerFinancials (
    CustomerID INT PRIMARY KEY,
    CreditScore INT,
    Balance DECIMAL(10, 2)
);


Use Cases for Vertical Partitioning

  • Rarely Accessed Columns: Columns that are used occasionally or only by specific applications can be moved to a separate table to minimize memory overhead when querying frequently accessed columns. Example: CreditScore and Balance are rarely accessed; partition them into CustomerFinancials.
  • OLTP Optimization: In transactional systems, tables accessed frequently by concurrent users may benefit from reduced memory footprint and decreased cache contention by splitting into smaller entities.
  • Security or Compliance: Sensitive data (e.g., financial records or personally identifiable information) can be stored in separate tables, simplifying access control.

Performance Impact

  • Improved Query Performance: Queries on smaller tables (e.g., CustomerDemographics) are faster because they involve fewer data pages, particularly for commonly accessed columns.
  • Reduced Lock Contention: Splitting tables can minimize contention and improve concurrency, as fewer rows will be affected during updates.
  • Reduced Memory Usage: Less frequently accessed columns will not consume buffer memory, reducing the overall database’s memory footprint.

Key Differences: Horizontal vs Vertical Partitioning

Aspect | Horizontal Partitioning | Vertical Partitioning
Definition | Break rows into partitions based on a range, list, or hash. | Split columns into separate partitions/tables.
Target Data Units | Rows | Columns
Use Cases | Time-based data, geographically distributed data, and sharding. | Isolating rarely used columns, OLTP optimization, and security.
Performance Goal | Minimize disk I/O during row scans. | Minimize memory usage and lock contention.
Complexity | Can involve partition schemes and functions. | Relational integrity between partitioned tables requires additional joins.

Choosing the Right Partitioning Strategy
When deciding between horizontal and vertical partitioning, consider the following.

  • Large Tables with Historical Data: Use horizontal partitioning for scenarios like transaction logs or sales records grouped by date. This reduces I/O operations and improves range query optimization.
  • Frequently Accessed vs Rarely Accessed Columns: Use vertical partitioning when a table has columns accessed at vastly different frequencies. Split the table to isolate data that can be queried independently.
  • Distributed Database Design: Use horizontal partitioning for sharding across distributed environments, assigning partitions to specific servers to scale out.
  • Security Compliance: Use vertical partitioning to segregate sensitive columns into their own tables, simplifying access control policies.

Scalability and Maintenance Considerations

  • Horizontal Partitioning: Offers better scalability for large datasets but requires careful design to ensure balanced distribution. Maintenance tasks like partition splits or merges generally involve significant I/O overhead.
  • Vertical Partitioning: While easier to maintain, frequent joins between partitions can hurt performance, especially in read-heavy workloads where all data is accessed simultaneously.

Real-World Example
Horizontal Partitioning in a Time-Series Database
An e-commerce website records page views grouped by day. Queries often analyze activity trends for specific days or weeks.
Solution: Partition the PageViews table horizontally by Date.

CREATE PARTITION FUNCTION PageViewsPartition(Date)
AS RANGE LEFT FOR VALUES ('2023-01-01', '2023-02-01', '2023-03-01');


Vertical Partitioning in an OLTP Database
A hospital system stores patient demographics and medical records in a single large table. Medical data is rarely accessed except during reports.
Solution: Create PatientDemographic and MedicalRecords tables to isolate frequently accessed columns.
CREATE TABLE PatientDemographics (
    PatientID INT PRIMARY KEY,
    Name VARCHAR(100),
    Address VARCHAR(200)
);

CREATE TABLE MedicalRecords (
    PatientID INT PRIMARY KEY,
    Diagnosis VARCHAR(100),
    TreatmentDetails TEXT
);
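
When a query does need columns from both halves, the two tables are reassembled with a join on the shared key; this extra join is the main cost of vertical partitioning noted in the maintenance considerations above:

SELECT d.PatientID, d.Name, m.Diagnosis
FROM PatientDemographics AS d
JOIN MedicalRecords AS m ON m.PatientID = d.PatientID;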

Conclusion
Partitioning is a powerful tool for performance optimization in SQL databases, whether you need to scale horizontally across rows or simplify access via vertical column segmentation. Understanding the difference between horizontal and vertical partitioning, their use cases, and performance implications empowers DBAs and developers to design efficient, scalable, and maintainable databases. By analyzing your system’s query patterns, data distribution, and storage needs, you can make informed decisions on the best partitioning strategy to implement. Carefully choose the approach that aligns with your workload to ensure seamless database performance and growth!

HostForLIFEASP.NET SQL Server 2022 Hosting

 



European SQL Server 2022 Hosting :: Notable Enhancements in Tempdb Performance in SQL Server 2022

clock May 27, 2025 07:59 by author Peter

SQL Server 2022, the most recent iteration of Microsoft's popular relational database management system, delivers a significant boost to Tempdb performance. Tempdb is a special system database in SQL Server that temporarily stores data during a variety of operations, including sorting, joining, and indexing; it also holds temporary objects such as temporary tables and table variables. In this post, we'll take a closer look at the significant enhancements to Tempdb performance in SQL Server 2022.

Prior to delving into the specifics of the enhancements, it is crucial to comprehend the significance of Tempdb performance for SQL Server. SQL Server relies heavily on Tempdb to carry out a number of tasks, and any Tempdb performance problems can have a big effect on SQL Server's overall performance. This is particularly true for systems with high transaction volumes, where Tempdb may become a performance snag. The efficiency of Tempdb-related activities should significantly increase for enterprises thanks to SQL Server 2022's enhancements.

So, what are the improvements in Tempdb performance in SQL Server 2022? The improvements are mainly related to the following areas:

  • Reducing contention for the Tempdb data file
  • Reducing contention for the Tempdb metadata
  • Improved scalability of Tempdb

Reducing Contention for the Tempdb Data File
One of the most significant improvements in SQL Server 2022 is the reduction in contention for the Tempdb data file. Reducing contention for the Tempdb data file is an important aspect of optimizing the performance of the SQL Server. In earlier versions of SQL Server, contention for the Tempdb data file was a significant bottleneck, especially in high-transaction systems. SQL Server 2022 has introduced a new algorithm that reduces contention for the Tempdb data file by distributing page allocations across multiple Uniform Resource Identifiers (URIs). Contention for the Tempdb data file can occur when multiple sessions or transactions are trying to access the same data page simultaneously, leading to contention and slowing down overall database performance. To reduce contention for the Tempdb data file, here are some strategies that can be used:

Increase the number of Tempdb data files
In older versions, SQL Server setup created only one Tempdb data file by default (since SQL Server 2016, setup proposes one file per processor core, up to eight). You can increase the number of data files to match the number of processor cores available in the system, which helps to distribute the workload across multiple files and reduces contention for a single file.
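
A minimal sketch of adding a second Tempdb data file; the logical name, file path, and sizes below are placeholder assumptions — adjust them to your own storage layout:

ALTER DATABASE tempdb
ADD FILE (
    NAME = N'tempdev2',                   -- hypothetical logical name
    FILENAME = N'T:\TempDB\tempdb2.ndf',  -- hypothetical path
    SIZE = 1GB,
    FILEGROWTH = 256MB
);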

Use trace flag 1118
This trace flag forces SQL Server to allocate uniform extents to Tempdb data files instead of mixed extents, reducing contention for allocation pages (such as SGAM) and improving performance. Note that on SQL Server 2016 and later, this allocation behavior is already the default for Tempdb, so the flag is only relevant on older versions.

Separate Tempdb from user databases
When Tempdb and user databases share the same disk, there is a higher chance of contention. By moving Tempdb to a separate disk, the disk IO is optimized, and the contention is reduced.

Use SSDs for Tempdb storage
Solid State Drives (SSDs) provide faster access to data, reducing the time taken for I/O operations. This can help to improve the performance of Tempdb, especially during heavy workloads.

Optimize Tempdb usage
Tempdb stores temporary data such as worktables, table variables, and cursors. Optimizing the usage of these objects, such as avoiding the use of temporary tables when not necessary and minimizing the use of cursors, can reduce the workload on Tempdb, leading to improved performance.

Reducing Contention for the Tempdb Metadata

The Tempdb database is a system database that stores temporary user objects, temporary tables, temporary stored procedures, and other temporary data generated during query processing. The metadata of these objects is stored in a special system table called sys.system_internals_allocation_units. As multiple user sessions can access Tempdb simultaneously, there can be contention for the Tempdb metadata, leading to performance issues. Here are a few strategies to reduce contention for the Tempdb metadata in SQL Server:

  • One way to reduce contention for the Tempdb metadata is to reduce the number of user connections. You can limit the number of users who can connect to the SQL Server instance or restrict access to specific applications or users.
  • By default, SQL Server creates a single data file for Tempdb, which can lead to contention for the Tempdb metadata. To reduce this contention, you can configure multiple Tempdb data files, each stored on a separate physical disk. This allows multiple threads to access Tempdb simultaneously, reducing contention for the Tempdb metadata.
  • Another way to reduce contention for the Tempdb metadata is to move the Tempdb database to a dedicated disk. By doing this, you can reduce the amount of disk I/O generated by other databases, improving the performance of Tempdb and reducing contention for the Tempdb metadata.
  • Global temporary tables are stored in Tempdb and can lead to contention for the Tempdb metadata, especially if they are accessed by multiple user sessions simultaneously. To reduce contention for the Tempdb metadata, you can reduce the use of global temporary tables or replace them with local temporary tables stored in the user database.
  • It is important to monitor the Tempdb metadata contention regularly. You can use SQL Server Profiler or System Monitor to monitor the Tempdb metadata contention and identify performance issues. By doing this, you can take proactive measures to reduce contention for the Tempdb metadata and improve the performance of your SQL Server instance.

Improved Scalability of Tempdb
The scalability of Tempdb is an important consideration for managing large databases and improving the performance of SQL Server. Few ways to improve the scalability of Tempdb in SQL Server:

  • By default, Tempdb has only one data file and one log file. This can cause contention and performance issues as the database grows. To improve scalability, you can split Tempdb into multiple data files, one for each CPU core or up to 8 cores per file for OLTP workloads. This allows SQL Server to spread the load across multiple files and reduce contention.
  • Tempdb should be set to automatically grow as needed to avoid running out of space. However, the default settings may not be optimal for your workload. Configure the auto growth settings for Tempdb based on your database's usage and expected growth rate. You should also preallocate space for Tempdb to avoid fragmentation and disk space issues.
  • Monitoring Tempdb usage is critical to identifying performance issues and tuning the database. Use SQL Server's built-in tools like DMVs and performance counters to monitor Tempdb usage, including page allocation, contention, and IO operations. This can help you identify bottlenecks and adjust the database configuration to improve performance.
  • Separating the data and log files for Tempdb can improve performance by allowing them to be stored on different disks or storage systems. This can reduce contention and improve IO performance.
  • Using solid-state drives (SSDs) for Tempdb can significantly improve performance by reducing IO latency and increasing throughput. SSDs are faster than traditional hard disk drives and can enhance the scalability of Tempdb.

Optimizing the database configuration, monitoring usage, and using appropriate hardware can improve the performance and scalability of SQL Server for your workload.
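
As a starting point for this kind of monitoring, the following query summarizes current Tempdb space consumption by category using the tempdb.sys.dm_db_file_space_usage DMV (page counts are converted to MB at 8 KB per page):

SELECT
    SUM(user_object_reserved_page_count)     * 8 / 1024 AS user_objects_mb,
    SUM(internal_object_reserved_page_count) * 8 / 1024 AS internal_objects_mb,
    SUM(version_store_reserved_page_count)   * 8 / 1024 AS version_store_mb,
    SUM(unallocated_extent_page_count)       * 8 / 1024 AS free_space_mb
FROM tempdb.sys.dm_db_file_space_usage;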

Conclusion
SQL Server 2022's Tempdb enhancements are important for improving SQL Server's overall performance. Improving Tempdb's scalability, lowering contention for the Tempdb data file, and lowering contention for the Tempdb metadata are the primary areas for improvement. A new algorithm that divides page allocations among several URIs reduces contention for the Tempdb data file. This algorithm can be further optimized by increasing the number of Tempdb data files, utilizing trace flag 1118, separating Tempdb from user databases, utilizing SSDs for Tempdb storage, and optimizing Tempdb usage.
To reduce contention for the Tempdb metadata, users can reduce the number of user connections, configure multiple Tempdb data files, move Tempdb to a dedicated disk, reduce the use of global temporary tables, and monitor the Tempdb metadata contention regularly. Improved scalability of Tempdb can be achieved by splitting Tempdb into multiple data files, tuning autogrowth settings and preallocating space, monitoring usage with DMVs and performance counters, separating the data and log files, and using SSDs. These improvements to Tempdb performance in SQL Server 2022 will significantly enhance the performance of high-transaction systems and provide faster processing of temporary data.

HostForLIFEASP.NET SQL Server 2022 Hosting


