European Windows 2019 Hosting BLOG

BLOG about Windows 2019 Hosting and SQL 2019 Hosting - Dedicated to European Windows Hosting Customer

SQL Server Hosting - HostForLIFEASP.NET :: Change Data Capture (CDC) In SQL Server

March 23, 2021 07:26 by author Peter

Every developer who has worked with SQL Server has sooner or later come across this problem: a copy of the row(s) has to be taken before performing any DML operation. The table the copy goes into is generally named ‘tablename_history’ or ‘tablename_backup’, and the copy is made by an insert query in a stored procedure or trigger, whichever is found appropriate.
 
Recently I stumbled upon a feature in SQL Server called Change Data Capture (CDC for short), which, once enabled, does the above job asynchronously by default and is supported on SQL Server 2008 and later.
 
Enabling Change Data Capture
To implement CDC we first need to enable it on the database. This is done by executing the stored procedure "sys.sp_cdc_enable_db" as given below.
    -- To Enable CDC  
    USE [CDC_TEST]  
    GO  
    EXEC sys.sp_cdc_enable_db  
    GO  


Now, to enable CDC on the table, we need to execute the stored procedure "sys.sp_cdc_enable_table" with its input parameters as given below.
    USE [CDC_TEST]  
    EXEC sys.sp_cdc_enable_table     
      @source_schema = 'dbo', -- Is the name of the schema to which the source table belongs.  
      @source_name = 'Customer', -- Is the name of the source table on which to enable change data capture    
      @role_name     = NULL -- Is the name of the database role used to gate access to change data, we can mention null if we want all the users having access to the database to view the CDC data  


Once the stored procedure executes successfully, several tables under the "cdc" schema are generated under the System Tables folder.


The tables include the following:
    cdc.captured_columns, which contains the list of captured columns
    cdc.change_tables, which contains the list of tables that are enabled for capture
    cdc.ddl_history, which records the history of all DDL changes since change data capture was enabled
    cdc.index_columns, which contains all the indexes that are associated with the change tables
    cdc.lsn_time_mapping, which maps LSN numbers to times; and finally, one change table for each CDC-enabled table, used to capture the DML changes on the source table
    cdc.dbo_Customer_CT, which contains the actual data before any DML operation is executed and some additional metadata like the operation, affected columns count, etc. The name of the table varies depending on the source table on which CDC is enabled, but in general it will be "NameOfSchema_TableName_CT", hence the name "dbo_Customer_CT".

Along with the tables, the two SQL Agent jobs given below are also created:
    cdc.CDC_TEST_capture, the job responsible for pushing the DML changes into the change tables
    cdc.CDC_TEST_cleanup, the job responsible for cleaning up records from the change tables. This job is created automatically by SQL Server to keep the number of records in the change tables in check; if this job fails to run, the change tables grow ever larger.
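The defaults for these jobs can be inspected and changed with the system procedures sys.sp_cdc_help_jobs and sys.sp_cdc_change_job. A minimal sketch; the retention value below, in minutes, is just an assumed example:
    -- View the current capture and cleanup job settings
    EXEC sys.sp_cdc_help_jobs
    
    -- Keep two days (2880 minutes) of change rows before cleanup (assumed value)
    EXEC sys.sp_cdc_change_job
      @job_type = N'cleanup',
      @retention = 2880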

Detect Changes
So now that we have implemented CDC on the database and table, let's perform some DML operations given below
    INSERT INTO [dbo].[Customer]  
               ([CustName]  
               ,[CustMobNo]  
               ,[Address]  
               ,[SubAreaId])  
         VALUES  
               ('test cdc'  
               ,'9876543215'  
               ,'Home Address'  
               ,1)  
      
    UPDATE [dbo].[Customer]  
    SET  
        CustName = 'test cdc 2',  
        CustMobNo = '9876543216',   
        [Address] = 'Address updated',  
        SubAreaId = 2   
    WHERE CustId = 1  
      
    DELETE [dbo].[Customer] WHERE CustId = 1  


The results of the executed DML queries are populated in the [cdc].[dbo_Customer_CT] table, as shown in the image below.


The first five columns are metadata about the changed rows. The column '__$operation' is of particular significance, as it identifies the DML operation.
    __$operation = 1 denotes a deleted row
    __$operation = 2 denotes a newly inserted row
    __$operation = 3 denotes the row before an update
    __$operation = 4 denotes the row after an update

However, querying the CDC tables directly is not advised by Microsoft; instead, we should use the table-valued functions that were created while enabling CDC on the table. In this case, we have a table-valued function called "fn_cdc_get_all_changes_dbo_Customer", which can be used as given below.
    DECLARE @from_lsn binary (10), @to_lsn binary (10)  
    SET @from_lsn = sys.fn_cdc_get_min_lsn('dbo_Customer') -- capture instance name: schema plus table name  
    SET @to_lsn = sys.fn_cdc_get_max_lsn()  
      
    SELECT *  
    FROM cdc.[fn_cdc_get_all_changes_dbo_Customer](@from_lsn, @to_lsn, 'all')  
    ORDER BY __$seqval  
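The LSN range can also be derived from a time window using the system functions sys.fn_cdc_map_time_to_lsn and sys.fn_cdc_map_lsn_to_time. A short sketch; the dates are assumed example values:
    DECLARE @from_lsn binary (10), @to_lsn binary (10)  
    -- Map a time window onto the LSN range it covers (dates assumed for illustration)
    SET @from_lsn = sys.fn_cdc_map_time_to_lsn('smallest greater than or equal', '2021-03-22 00:00')  
    SET @to_lsn = sys.fn_cdc_map_time_to_lsn('largest less than or equal', '2021-03-23 00:00')  
      
    SELECT *  
    FROM cdc.[fn_cdc_get_all_changes_dbo_Customer](@from_lsn, @to_lsn, 'all')  
    ORDER BY __$seqval  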


Disable CDC
Once CDC is enabled, we cannot change the primary key of the table or truncate the table; and if we have to add or remove a column, the corresponding change table does not get updated and hence won't detect any changes for the newly added column. In these cases, we will have to disable CDC, make the appropriate changes, and re-enable CDC on the table. Below is the stored procedure that can be used to remove CDC from a table.
    EXEC sys.sp_cdc_disable_table     
      @source_schema = 'dbo' ,     
      @source_name = 'Customer',  
      @capture_instance ='all'  
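To verify which tables in the database currently have CDC enabled, for example before disabling it, SQL Server also provides a help procedure:
    USE [CDC_TEST]  
    EXEC sys.sp_cdc_help_change_data_capture  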

Note

  • The SQL Agent should be up and running at all times
  • The cdc_jobs configuration values are very important to set correctly. Overestimating or underestimating them will have a detrimental impact on your application's performance. You may need to tune them for your workload; a performance test run against a representative workload can help you find the optimal values
  • The cleanup job is scheduled by default to run at 02:00 AM every day
  • The capture job is scheduled as “Start automatically when SQL Server Agent starts”. As it uses the continuous parameter, you normally do not need to change its “Schedule type”.

HostForLIFEASP.NET SQL Server Hosting



SQL Server Hosting - HostForLIFE :: Handle JSON Data In SQL

March 15, 2021 07:25 by author Peter

In today's development world we are exposed to both SQL and NoSQL database operations. At some point we may need to map JSON data to a SQL table. Here is an article on parsing JSON in SQL. Let's start.
 
Suppose the front-end/client sends JSON data in string format. Define a variable to store the JSON string as below.
    DECLARE @json nvarchar(max);  
    -- input from client  
    set @json = N'[{ \"mid\": \"/m/01dvt1\", \"description\": \"Joint\", \"score\": 0.975906968,   
    \"topicality\": 0.975906968 }, { \"mid\": \"/m/0dzf4\", \"description\": \"Arm\", \"score\": 0.9426941, \"topicality\": 0.9426941 },   
    { \"mid\": \"/m/01ssh5\", \"description\": \"Shoulder\", \"score\": 0.936277151, \"topicality\": 0.936277151 },    
    { \"mid\": \"/m/035r7c\", \"description\": \"Leg\", \"score\": 0.925112, \"topicality\": 0.925112 },  
    { \"mid\": \"/m/01d40f\", \"description\": \"Dress\", \"score\": 0.920576453, \"topicality\": 0.920576453 },   
    { \"mid\": \"/m/02p0tk3\", \"description\": \"Human body\", \"score\": 0.8836405, \"topicality\": 0.8836405 },  
    { \"mid\": \"/m/062581\", \"description\": \"Sleeve\", \"score\": 0.8722252, \"topicality\": 0.8722252 },   
    { \"mid\": \"/m/019swr\", \"description\": \"Knee\", \"score\": 0.8650081, \"topicality\": 0.8650081 },  
    { \"mid\": \"/m/01j04m\", \"description\": \"Thigh\", \"score\": 0.858148634, \"topicality\": 0.858148634 },    
    { \"mid\": \"/m/01vm1p\", \"description\": \"Elbow\", \"score\": 0.834722638, \"topicality\": 0.834722638 }]';  


This is JSON data in string format (equivalent to JSON.stringify() in JavaScript). Before proceeding to map the data, we should first produce a valid JSON object out of the string input. We can do that by replacing the "\" and "/" characters with empty strings. Here is the code.
    set @json = REPLACE(@json,'\','');  
    set @json = REPLACE(@json,'/','');  
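To confirm that the cleaned-up string is now valid JSON, you can check it with the built-in ISJSON function (available since SQL Server 2016), which returns 1 for valid JSON:
    SELECT ISJSON(@json);  -- 1 = valid JSON, 0 = invalid  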


SQL Server has a built-in function, "OPENJSON", to convert a JSON object into row and column format. Let's see the output.
    select * from OPENJSON ( @json ) ;   

Output


Here "type" refers to data type of JSON data. For more info about OPENJSON, here is a link MSDN.
 
Now, we have to parse the value column into SQL columns. We can do so by using the below query.
    select *  FROM    
     OPENJSON ( @json )    
    WITH (  
      mid varchar(10) '$.mid',  
      description varchar(max) '$.description',  
      score nvarchar(20) '$.score',  
      topicality float '$.topicality'  
    )  


Here $.mid, $.description, $.score, and $.topicality are JSON properties. You need to replace them based on your own JSON property names.
 
Output


We can copy these records to a SQL table as below.
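If the jsondata table does not exist yet, a minimal definition is sketched below; the column types are assumptions chosen to match the WITH clause above.
    -- Assumed table definition matching the OPENJSON WITH clause
    CREATE TABLE jsondata (  
      mid varchar(10),  
      description varchar(max),  
      score nvarchar(20),  
      topicality float  
    );  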

    insert into jsondata (mid,description,score,topicality)   
    select mid,description,score,topicality  
    FROM    
     OPENJSON ( @json )    
    WITH (  
      mid varchar(10) '$.mid',  
      description varchar(max) '$.description',  
      score nvarchar(20) '$.score',  
      topicality float '$.topicality'   
    );  
    select * from jsondata;  

Here, I am inserting the JSON data into an existing SQL table and then selecting its records.
 
Output

We can map NoSQL data to a SQL data table using the predefined SQL method "OPENJSON". I hope this article is helpful for you. Thank you for spending time reading it. I am always open to any input or suggestions. Thank you!

HostForLIFEASP.NET SQL Server 2019 Hosting



SQL Server 2019 Hosting - HostForLIFEASP.NET :: RAND Function In SQL

March 12, 2021 11:17 by author Peter

In this blog, we will learn how to use the RAND() function based on our business requirements.
So here we will cover the following things:
    Definition
    Random Decimal Number
    Random Integer Range
    Real example
    Summary

Definition
As the name suggests, the RAND function can be used to return a random number, which can be decimal or integer.
 
The syntax for the RAND function would be:
    SELECT RAND()  

This function will return a random decimal number between 0 and 1, as shown in this image.

 
Random Decimal Number
 
We can create a random decimal number between two given numbers; for that, we can use this formula.
    SELECT RAND()*(b-a)+a;  

Here in this formula, b is the greater number and a is the lower number; the formula will return a decimal number between the two.
 
Random Integer Range
 
We can create a random integer between two given numbers; for that, we can use this formula.
    SELECT FLOOR(RAND()*(b-a+1))+a;  

Here in this formula, b is the greater number and a is the lower number; the formula will return an integer between the two, inclusive.
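For instance, to simulate a die roll, a random integer from 1 to 6, the formula can be instantiated like this:
    SELECT FLOOR(RAND()*(6-1+1))+1; -- returns 1, 2, 3, 4, 5 or 6  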
 
Note
This RAND() function can be used on the following SQL Server versions: SQL Server 2017, SQL Server 2016, SQL Server 2014, SQL Server 2012, SQL Server 2008 R2, SQL Server 2008, and SQL Server 2005.
 
Real example
 
Now let's see real examples of this function. Here I am using multiple examples so that you can differentiate among them.
 
--//---1- Random Decimal-----//

    SELECT RAND()  
    SELECT RAND(7);  
    SELECT RAND(-7);  
    SELECT RAND()*(7-1)+1;  
    SELECT RAND(8)*(7-1)+1;  
    SELECT RAND(-4)*(7-1)+1;  

--//---2- Random Integer-----//


    SELECT FLOOR(RAND()*(8-4+1))+4;  
    SELECT FLOOR(RAND(6)*(8-5+1))+5;  
    SELECT FLOOR(RAND(123456789)*(10-5+1))+5;  


See this image for the result.

HostForLIFEASP.NET SQL Server 2019 Hosting



SQL Server Hosting - HostForLIFEASP.NET :: Deploy SSIS Package To SQL Server

March 8, 2021 06:41 by author Peter

Before going further, first make sure you have SQL Server Integration Services installed. Open the Visual Studio SSIS package project and right-click on the project and hit Deploy to deploy all packages; if you want to deploy individual packages, then right-click on the package and hit Deploy.


The first window is the introduction window; click the Next button.

We have two deployment targets,

    SSIS in SQL Server
    SSIS in Azure Data Factory

As in this article we are going to deploy on SQL Server, we must select SSIS in SQL Server and click Next.


Select the destination: enter the SQL Server name, authentication type, username, and password, and click Connect. Once connected, browse the project folder path if available; if not available, create a directory in SSISDB, create a new project, and hit Next.

You can review all given changes and hit Deploy.

You can check the deployment result in the last window. If all results are passed, then click Close.


The above screenshot shows that all results passed and the deployment succeeded.


Go to SQL Server, expand Integration Services Catalogs, and go to SSISDB; there you can see the created folder and project and the deployed packages.
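The same deployment can also be scripted in T-SQL against the SSISDB catalog using the catalog.deploy_project procedure. A sketch, where the folder name, project name, and .ispac path are assumed examples:
    DECLARE @project_binary varbinary(max), @operation_id bigint  
      
    -- Read the built .ispac file from disk (path is an assumed example)  
    SELECT @project_binary = BulkColumn  
    FROM OPENROWSET(BULK N'C:\SSIS\MyProject.ispac', SINGLE_BLOB) AS ispac  
      
    EXEC [SSISDB].[catalog].[deploy_project]  
      @folder_name = N'MyFolder',      -- assumed; the folder must already exist in SSISDB  
      @project_name = N'MyProject',    -- assumed project name  
      @project_stream = @project_binary,  
      @operation_id = @operation_id OUTPUT  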

HostForLIFEASP.NET SQL Server Hosting



SQL Server Hosting - HostForLIFE :: Merge Statement In SQL

March 1, 2021 06:14 by author Peter

We use the merge statement when we have to merge data from a source table into a target table. Based on the conditions specified, it will insert, update, and delete rows in the target table, all within the same statement. The merge statement is very useful when we have large data tables to load, especially when specific actions are to be taken when rows match and when they do not.
 
The statement has many practical uses in both online transaction processing (OLTP) scenarios and in data warehousing ones. As an example of an OLTP use case, suppose that you have a table that isn’t updated directly by your application and instead, you get a delta of changes periodically from an external system. You first load the delta of changes into a staging table and then use the staging table as the source for the merge operation into the target.
 
The below diagram shows the source table and target table with corresponding actions: Insert, Delete and Update


It shows three use cases,
    When the source table has rows that do not exist in the target table, we have to insert these rows into the target table.
    When the target table has rows that do not exist in the source table, we have to delete these rows from the target table.
    When the source table has keys matching keys in the target table, we need to update the rows in the target table with the values coming from the source table.

Below is the basic structure of the Merge statement,
    MERGE INTO <target_table> AS TGT USING <source_table> AS SRC    
    ON <merge_condition>    
    WHEN MATCHED    
        THEN update_statement    -- When we have a key matching row
    WHEN NOT MATCHED    
        THEN insert_statement    -- when row exists in the source table and doesn't exist in the target table
    WHEN NOT MATCHED BY SOURCE    
        THEN DELETE;             -- Row doesn't exist in the source table


Consider the below example,
It is very easy to understand the merging concept here. We have two tables, a source table and a target table. The source table has new prices for fruits (for example, the orange rate changed from 15.00 to 25.00) and new fruits that have arrived at the store. When we merge, we also delete the few rows that do not exist in the source table.
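A minimal sketch of the two tables follows; the names and columns are taken from the merge code below, and the column types and seed rows are assumed sample values.
    CREATE TABLE Fruits (id int PRIMARY KEY, name varchar(50), amount decimal(10,2));
    CREATE TABLE source (id int PRIMARY KEY, name varchar(50), amount decimal(10,2));

    -- Assumed sample data: the target holds the old prices, the source the new state
    INSERT INTO Fruits VALUES (1, 'Orange', 15.00), (2, 'Apple', 10.00);
    INSERT INTO source VALUES (1, 'Orange', 25.00), (3, 'Mango', 20.00);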

 
Code to merge tables.
    MERGE INTO Fruits WITH(SERIALIZABLE) f
    USING source s
    ON (s.id = f.id)
    WHEN MATCHED
    THEN UPDATE SET
    f.name= s.name,
    f.amount = s.amount
    WHEN NOT MATCHED BY TARGET
    THEN INSERT (id, name, amount) VALUES (s.id, s.name, s.amount)
    WHEN NOT MATCHED BY SOURCE
    THEN DELETE;

    SELECT @@ROWCOUNT;
    GO


Important Merge Conflict
Suppose that a certain key K doesn’t yet exist in the target table. Two processes, P1 and P2, run a MERGE statement such as the previous one at the same time with the same source key K. It is possible for the MERGE statement issued by P1 to insert a new row with the key K between the point in time when the MERGE statement issued by P2 checks whether the target already has that key and the point when it inserts rows. In such a case, the MERGE statement issued by P2 fails due to a primary key violation. To prevent such a failure, use the hint SERIALIZABLE or HOLDLOCK (both have equivalent meanings) against the target, as shown in the previous statement. This hint means that the statement uses a serializable isolation level to serialize access to the data, meaning that once you get access to the data, it’s as if you’re the only one interacting with it.
 
In this article, we learned how a merge statement improves performance by reading and processing data in a single query. There is no need to write three different statements. This avoids multiple I/O operations on the disk for each of the three statements individually, because data is read only once from the source table.

HostForLIFEASP.NET SQL Server 2019 Hosting



SQL Server 2019 Hosting - HostForLIFEASP.NET :: SQL Index Creation Using DROP EXISTING ON

February 23, 2021 05:46 by author Peter

When you are making changes to an existing non-clustered index, SQL Server provides a wide variety of options. One of the more frequently used methods is DROP_EXISTING; in this post you will learn all about that option. This option drops and recreates an existing index in a single statement, without the index being explicitly dropped first. Let us take a moment to understand the behavior of this choice.
 
DROP_EXISTING = ON
This is my preferred method. It will drop the current index only after it finishes creating and building the index with the new definition. The pitfall is that if the index does not exist, you will get an error and must create it without the option or set it to OFF. However, the more important benefit of using this one is all about performance: the index will still be used by active queries until it is rebuilt with the new definition.
    CREATE NONCLUSTERED INDEX [dcacIDX_ServiceType] ON [dbo].[Accounts]  
    (  
       [ServiceType] ASC  
    )  
    INCLUDE([AccountId]) WITH (STATISTICS_NORECOMPUTE = OFF, DROP_EXISTING = ON, ONLINE = OFF, OPTIMIZE_FOR_SEQUENTIAL_KEY = OFF) ON [PRIMARY]  
    GO  


If the index does not exist, you will get error 7999.
 
Msg 7999, Level 16, State 9, Line 1
 
Could not find any index named 'dcacIDX_ServiceType' for table 'dbo.Accounts'.
 
There are a few exceptions to keep in mind per docs.microsoft.com.
 
With DROP_EXISTING, you can change,

  • A nonclustered rowstore index to a clustered rowstore index.

With DROP_EXISTING, you cannot change,

  • A clustered rowstore index to a nonclustered rowstore index.
  • A clustered columnstore index to any type of rowstore index.


DROP and CREATE
This option is cleaner and won't error if the index doesn't already exist. However, I caution you when using this, especially on a large table. This option drops the index before it creates the new one, leaving your system without the previous index definition. That can create a huge performance issue while the system waits for the new index to be created. I know this firsthand, as I did this with a client a few years ago, during the day, while trying to fix a performance issue. I created a worse issue while waiting for the new one to be created. It took 45 minutes to create the new index with the new definition, which caused CPU to spike to 100% while active queries were trying to come through. Which, sadly, in turn slowed down the new index creation.

    DROP INDEX IF EXISTS [dcacIDX_ServiceType] ON [dbo].[Accounts]  
    GO  
    CREATE NONCLUSTERED INDEX [dcacIDX_ServiceType] ON [dbo].[Accounts]  
    (  
       [ServiceType] ASC  
    )  
    INCLUDE([AccountId]) WITH (STATISTICS_NORECOMPUTE = OFF, DROP_EXISTING = OFF, ONLINE = OFF, OPTIMIZE_FOR_SEQUENTIAL_KEY = OFF) ON [PRIMARY]  
    GO  

Now I should also note that the DROP_EXISTING method is also faster when you must modify a clustered index. Every non-clustered index refers to the clustered index using what is called a clustering key, essentially a pointer to the row in the clustered index. When a clustered index is dropped and re-created, SQL Server must rebuild the non-clustered indexes on that table. In fact, it gets done twice, by rebuilding them on the drop and again on the create of the clustered index. Using DROP_EXISTING = ON prevents you from having to rebuild all these indexes, as their keys will stay the same, thus making it significantly faster.
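As a sketch, rebuilding a clustered index in place with the option on might look like this; the index and column names are assumptions for illustration (note that this does not apply to an index created by a PRIMARY KEY constraint):
    CREATE UNIQUE CLUSTERED INDEX [cIDX_Accounts_AccountId] ON [dbo].[Accounts]  
    (  
       [AccountId] ASC  
    )  
    WITH (DROP_EXISTING = ON) ON [PRIMARY]  
    GO  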
 
The reason I took the time to write this quick blog is to remind you to consider using DROP_EXISTING = ON rather than the DROP and CREATE method when possible. Do not introduce a performance issue when you can avoid it, and you can more efficiently make the desired changes you need. Just a friendly reminder.

HostForLIFEASP.NET SQL Server 2019 Hosting

 



SQL Server 2019 Hosting - HostForLIFEASP.NET :: Split Alphabets From Alphanumeric String In SQL Server

February 16, 2021 07:32 by author Peter
This article explains how to split alphabets from an alphanumeric string in SQL Server. Here I'll also explain how to create a function in SQL Server.
In my previous article, I explained what an alphanumeric string is and how to split numbers from an alphanumeric string in SQL Server; you might like to read it.
 
While working with data-driven applications, we sometimes need to split the numbers and alphabets from an input string as per the given requirement. I got many emails from students and beginner programmers asking me to write an article on ways to get only varchar values from a string in SQL Server. So today in this article I'll explain how to achieve this requirement: splitting the numbers and alphabets and returning only the varchar value from the string.
 
Requirement
  1. How to get alphabets from an alphanumeric string in SQL Server?

Implementation
So, let's create a query to split the alphabets from the string in the SQL server.

Get Alphabets from string
Let's split the alphanumeric string and get only alphabets from the string. So, we will take an example for demonstration.
 
I have my enrollment number, which is a combination of numbers and alphabets, and I want only alphabets from my enrollment number.
 
Example
  1. Input (Enrollment Number): SOE14CE13017  
  2. Expected Output: SOECE  
SQL Query to Get Alphabets From String
DECLARE @strEnrollmentNumber NVARCHAR(MAX) = 'SOE14CE13017'  
DECLARE @intNumber INT    
   
SET @intNumber = PATINDEX('%[^A-Za-z]%', @strEnrollmentNumber)    
   
WHILE @intNumber > 0    
  BEGIN  
    SET @strEnrollmentNumber = STUFF(@strEnrollmentNumber, @intNumber, 1, '' )    
    SET @intNumber = PATINDEX('%[^A-Za-z]%', @strEnrollmentNumber )    
END  

Explanation
As you can see in the query above, here we have declared two temp variables: @strEnrollmentNumber, which holds the input string, and @intNumber, which is used to check whether the input string contains a number. Then, using the PATINDEX function of SQL Server, we identify whether the input string contains a number (a non-alphabet character) and store the return value of this function in @intNumber.
 
In SQL Server, PATINDEX is a function that accepts a search pattern and an expression (input string) as parameters and returns the starting position of the first occurrence of the pattern in the specified expression (input string); PATINDEX returns 0 if the pattern is not found in the specified expression (input string). Here, we have used the pattern '%[^A-Za-z]%', which matches the first character that is not an alphabet from a to z or A to Z.
 
Now, using the WHILE loop in SQL Server, we remove from the input string, one by one, the characters that match the pattern '%[^A-Za-z]%' (the numbers) using the STUFF function, store the result in the @strEnrollmentNumber variable, and again set the value of @intNumber as per the specified pattern. As we used the condition @intNumber > 0 in the WHILE loop, it repeats the same process again and again, removing numbers from the input string one by one, until @intNumber becomes 0 and all the numbers are removed from the input string.
 
In SQL Server, the STUFF() function is used to delete a specified sequence of characters from a source/input string and then insert another sequence of characters at a specified starting point. I have written an article on the STUFF() function with syntax and examples that you might like to read.
 
Use of Query
SELECT @strEnrollmentNumber  

Output
SOECE 

You also can create a function to get only alphabets from the input string to reduce the complexity of the query.
 
Function to Get Alphabets From String
CREATE FUNCTION [dbo].[GetAlphabetsFromString]  
(  
    @strInputString  VARCHAR(MAX)  
)    
RETURNS VARCHAR(MAX)  
AS    
BEGIN    
    DECLARE @intValue INT    
    SET @intValue = PATINDEX('%[^A-Za-z]%', @strInputString)    
BEGIN    
    WHILE @intValue > 0    
    BEGIN    
        SET @strInputString = STUFF(@strInputString, @intValue, 1, '' )    
        SET @intValue = PATINDEX('%[^A-Za-z]%', @strInputString )    
    END    
END    
    RETURN ISNULL(@strInputString,'')    
END    
GO  
Use of Function
SELECT dbo.GetAlphabetsFromString('SOE14CE13017')  

Output
SOECE  

Explanation
As you can see in the function created above, we have a function that accepts an input string as an argument; all the logic is the same as explained in the SQL query above. Finally, this function returns the varchar value from the string, and if the input string does not contain any alphabets, the function returns an empty string.
 

Summary
In this article, we learned how to split the alphabets from an alphanumeric string in SQL Server, learned about the PATINDEX() and STUFF() functions, and saw the way to create a function in SQL Server that returns a varchar value.

HostForLIFEASP.NET SQL Server 2019 Hosting



SQL Server Hosting - HostForLIFE :: How to Fix SQL Server Master Database Corruption?

February 8, 2021 08:46 by author Peter

The master database is the most important database in SQL Server. SQL Server has no meaning without the master database, and a user is unable to access SQL databases without it. It stores all the primary configuration details of the SQL Server instance. Whenever a user installs SQL Server, it creates the master, MSDB, model, and TEMPDB system databases by default. All these system databases, along with the master database, create system tables, which record all the server parameters and detailed information about every database and user. Moreover, the master database is stored in a physical file known as master.mdf, and the transaction log file corresponding to the master file is named mastlog.ldf. This database is stored at a default location, with a small size. However, if the SQL Server master database gets corrupted for some reason, SQL Server will not start, and neither will the user databases.

Master database corruption in SQL Server is a common problem faced by users. Therefore, it is always suggested to take a backup of the master database on a regular basis to keep permanent access to the SQL Server. If the level of corruption in the master.mdf file is really high, then SQL Server will not start. To fix that, a user needs to rebuild master.mdf using the command prompt. Moreover, if the master database is suffering from a minor level of corruption, a user can start the database but is not allowed to access the details stored in it.

How to Fix SQL Server Master Database Corruption?
In this segment of the article, different solutions to overcome master database corruption are discussed. Users can choose any of these according to their choice.

Restore Master Database from Backup
In order to restore the master database from backup, a user must have a complete backup of the master.mdf file. Moreover, before you begin, please start SQL Server in single-user mode. For this, follow the steps given below.

    First of all, open SQL Server Configuration Manager and choose the SQL Server Services option.
    After that, choose the SQL Server instance.
    Now, right-click on it and select Properties.
    In the Properties window, click on the Advanced tab to open it.
    Now, go to the Startup Parameters option and add the -m; prefix before the existing parameters.
    Then, start the SQL Server in single-user mode.

    Now, to restore the master.mdf file, follow the given steps.

    Start the SQL Server and open cmd.exe from the Start menu.
    Enter SQLCMD at the command prompt.
    Now, to restore the master database, run the following command:

    RESTORE DATABASE master FROM DISK = 'D:\Backupfolder\master.bak' WITH REPLACE

    After executing the above command, remove the (-m) startup parameter and restart SQL Server.

A user can use this method to fix SQL Server master database corruption without any hassle. The only condition is that one must have a backup to use this method.

Rebuild Master Database in SQL Server

To rebuild the SQL Server master database, follow the steps mentioned below.
First of all, open the command prompt and change directories to the location of the setup.exe file on the local server. Its default location on the server is:

C:\Program Files\Microsoft SQL Server\130\Setup Bootstrap\Release

Now, run the following command from the command prompt:
Setup /QUIET /ACTION=REBUILDDATABASE /INSTANCENAME=InstanceName1 /SQLSYSADMINACCOUNTS=accounts [/SAPWD=Strong_Password] [/SQLCOLLATION=CollationName]  

As the rebuilding process completes, it returns to the command prompt without any message.

To confirm, one can view the summary.txt log file. The default location of the summary.txt log file is:
C:\Program Files\Microsoft SQL Server\130\Setup Bootstrap\Logs

Alternate Solution
Another option is to use a third-party SQL database recovery tool for an effortless solution. Such tools help users remove almost all types of corruption from SQL databases. Moreover, they are very easy to use compared to the manual solution.

Conclusion
The master database, or master.mdf, stores all the available metadata related to the SQL Server instance, for example, login details, configuration details, information about pointers, file locations, and much more. As it is not possible for a user to start SQL Server with an inconsistent or corrupted master database, there is a need to recover the corrupt master database. Therefore, in this post, we have discussed tricks to resolve master database corruption in SQL Server. A user can use any of these methods depending upon the criteria. The above suggested solutions have been tried and tested for master database recovery and can be used without any risk.

HostForLIFE.eu SQL Server 2019 Hosting



SQL Server Hosting - HostForLIFE :: How To Fix “Reference To Type ‘SqlConnection’ Claims Defined In ‘System.Data’, But It Could Not Be Found”?

January 25, 2021 11:29 by author Peter

In this blog, I will show you how to fix the following error:
“Reference to type ‘SqlConnection’ claims it is defined in ‘System.Data’, but it could not be found”.

Are you anxious? Then go to “THE SOLUTION”.

SPOILER

It’s a little bit painful, but it can be solved.
You’ll be able to have a .NET Framework desktop app or ASP.NET inherit/use SqlConnection from netstandard2.

PROBLEM

If you are getting this message: “Reference to type ‘SqlConnection’ claims it is defined in ‘System.Data’, but it could not be found”, it is because you are trying to use a .NET Standard component directly from the .NET Framework (desktop or ASP.NET).

Why is it that you cannot use the SqlConnection object from the .NET Framework when it is hosted inside NetStandard 2.0?

ABOUT MY PROJECT
I have an application that is both ASP.NET and WinForms, using .NET Framework 4.7.1. The core of the code is in assemblies that are shared between both, using System.Web and System.WebForms.

I enjoy migrating to new technologies to see how far they can go. And in this case, my hope was to use less memory on the server side.

I started using .NET Standard 1.6, but I didn’t feel comfortable deploying it on production. And when 2.0 was released with the SQL Client, I imagined that it was the time. However, I didn't have time at that moment, so I started to migrate a month ago. Well, it took only two days.

I’ve decided to write this post because there wasn't a good solution for my purpose, and I made this on my own.

ATTACHED PROJECTS
I’ve attached two projects, one with the error and the other with the error fixed, so you can analyze them both.

THE SOLUTION


STEPS
Create an intermediate library in .NET Standard 2.0 to open the database connection.
All the code that opens connections must be in .NET Standard 2.0.

In all the projects that use System.Data, install the NuGet package System.Data.SqlClient 4.5.1 or later.

Change all the code in the .NET Framework libraries that uses System.Data to use the .NET Standard 2.0 library.
In the main program/ASP.NET, start the connection with this:
    using(var oCnn = Configurations.GetConnection())  
    {  
    …  
    }  

Why is this necessary?
If you don’t use “var”, your main code will make a reference to System.Data in the current .NET Framework project. But with “var”, the compilation will refer to the “netstandard2” System.Data.SqlClient at compile time.
That's all!

SAMPLE

See attached projects.

CONCLUSION
I always migrate to new technologies and find modern ways to do the same job in a more secure, better, and faster way for the end users. I figure there will be a long time before the technology gets deprecated, and I avoid using less than the 3rd version; I made an exception for this case, but only because I knew that the roadmap for .NET Standard 3.0 would support WinForms. So I think it's good to refactor the code to keep it updated.
In these samples, I have used .NET Framework 4.7.1 and C# 7.2 (to use: “in” SqlConnection).

HostForLIFE.eu SQL Server 2019 Hosting

 



SQL Server 2019 Hosting - HostForLIFEASP.NET :: Storage What SQL Server DBAs Need To Know

January 21, 2021 06:56 by author Peter

“One gerbil, two gerbils, or three gerbils?” is a common DBA joke about server and storage performance. No matter how many gerbils power your storage, you need to know what type they are and the power that they provide. Storage is not about gerbils; it is about IOPs, bandwidth, latency, and tiers.

As a DBA it is important for you to understand and know what kind of storage is attached to your servers and how it is handling your data. It is not important to master everything about it, but it is very advantageous to be able to talk to your storage admins or “Gerbil CoLo, LLC” provider intelligently, especially when you experience performance issues. Here is a list of things I encourage you to know and ask about.
 
Terminology
IOPs

IOPS stands for I/O (single read/write request) Operations Per Second. It is a performance metric that depends on the type of storage being used and can vary widely. It is important to understand how fast your storage can process data by knowing the expected IOPs and the actual IOPs once the array is processing workloads.
 
Bandwidth or Throughput
This is the measure of the size of the data in the I/O requests. You can figure out throughput by multiplying the I/O request size by the IOPs; the measure will be in megabytes or gigabytes per second. For example, 10,000 IOPs at a 64 KB request size works out to roughly 640 MB/s of throughput.
 
Latency
In my opinion this is the most important metric to understand. It's the time it takes to process an I/O request, and it's an indicator of a possible storage bottleneck. You measure this time from when the request is issued to when the request is completed. This determines the responsiveness of your storage.
 
Storage Tier & Automatic Storage Tiers
A modern-day array can be divided into tiers; some of those tiers can be slower spinning disks while others can be fast flash, or a hybrid of both. I think of these in terms of gerbils. You can get a small gerbil with little legs that can run a marathon, a medium one that runs a 5k at a moderate speed, and a large gerbil that's a speed racer. These can work separately (pinned) or merged into a team like in a relay, where your data is passed like a baton through each tier (automatic). In other words, your data can be demoted or promoted between the tiers of the storage device when needed for performance and capacity.
 
Performance Metrics
Note these apply to the guest OS; there are metrics for the hypervisor/storage stack that DBAs do not normally have access to. The important part is that the different parts of the stack should mostly agree about those numbers: if latency at the array side is greater than latency at the guest OS level, there is a big issue somewhere. The counters below come from Performance Monitor; a query sketch using SQL Server's own file statistics follows the list.

  • Avg. Disk sec/Read – Shows the average read latency.
  • Avg. Disk sec/Write – Shows the average write latency.
  • Avg. Disk sec/Transfer – Shows the combined averages for both read and writes.
  • Disk Transfers/sec - is the rate of read and write operations on the disk.
  • Disk Reads/sec - is the rate of read operations on the disk.
  • Disk Writes/sec - is the rate of write operations on the disk.
  • Avg. Disk Queue Length - is the average number of both read and write requests that were queued for the selected disk during the sample interval.
  • Current Disk Queue Length - is the number of requests outstanding on the disk at the time the performance data is collected.
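From inside SQL Server, per-file read and write latency can also be derived from the sys.dm_io_virtual_file_stats DMV. A minimal sketch:
    -- Average read/write latency per database file since instance startup
    SELECT DB_NAME(vfs.database_id) AS database_name,  
           mf.physical_name,  
           vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_latency_ms,  
           vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_latency_ms  
    FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs  
    JOIN sys.master_files AS mf  
      ON vfs.database_id = mf.database_id AND vfs.file_id = mf.file_id  
    ORDER BY avg_read_latency_ms DESC  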

Storage Types
RAID (redundant array of independent disks)
 
RAID is a solution that protects your data from a disk failure. You tend to hear administrators talk in terms of RAID 0, 1, 5, 6, and 10. As a database administrator you need to know what RAID type your data is on. For tempdb you want the fastest RAID possible, RAID 1 or 10, while maintaining disk fault tolerance. This is usually a concern with older SANs and no longer an issue with modern storage arrays, which take a different approach with object-based storage models, more like the cloud.
 
FLASH
High-speed storage based on non-volatile memory; you may see it referred to as NVMe, Non-Volatile Memory Express. These are SSDs, solid-state drives. One thing to keep in mind is that NVMe drives are SSDs, but not all SSDs are NVMe; there are different types of SSDs. No matter what type of SSD it is, these are really great for tempdb workloads.
 
Hyper-converged
This is referred to as HCI. Storage, networking, and compute are all bundled into one. This is the newest all-in-one hardware, which claims to save money and create ease of use. Keep in mind that this means the HCI processing power is now handling everything (networking, storage, IOPs, etc.).
 
Services
Snapshots

A capture of the state of your data taken at a point in time. These snapshots can be used as restores or backup copies. They are usually snapshot copies of your mdf and ldf files. Note: uncommitted transactions are not captured, and snapshots are not necessarily a replacement for backups. If your sysadmin asks about doing snapshots in lieu of backups, it's your job to ask a lot of hard questions of the backup or storage vendor doing the snapshots, and to test both the backup and, more importantly, the recovery. You need to ask about point-in-time recovery and how to handle page-level restores for corruption, just to name a couple.
 
Clones
A volume copy of your data; think of this as a disk drive copy. It takes the files and makes a replica from snapshots, creating a database copy.
 
Disk Replication (sync and async)
 
The replication of logical disk volumes from one array to another, in real time (synchronous) or asynchronously, for disaster recovery and business continuity.
 
Summary
If you educate yourself on these topics, it will go a long way toward making sure you can have intelligent conversations with your storage admins or providers. This will enable you to better advocate for your SQL environment when you experience performance issues related to storage. If your data is hosted elsewhere, like at Gerbil Colo, LLC, or even in a public cloud like Azure, make sure they can provide the above metrics to you. If they can't, it might be time to host your data elsewhere.

HostForLIFEASP.NET SQL Server 2019 Hosting



About HostForLIFE

HostForLIFE is European Windows Hosting Provider which focuses on Windows Platform only. We deliver on-demand hosting solutions including Shared hosting, Reseller Hosting, Cloud Hosting, Dedicated Servers, and IT as a Service for companies of all sizes.

We have offered the latest Windows 2019 Hosting, ASP.NET 5 Hosting, ASP.NET MVC 6 Hosting and SQL 2019 Hosting.

