Convert Sql Server Database To Xml Download

Addison Mauldin

Jun 12, 2024, 5:54:40 AM
to outucepiz

When migrating databases from MS SQL to MySQL, it is often necessary to translate MS SQL queries into MySQL syntax as well. The syntax of SQL queries in MS SQL and MySQL is similar but not identical. This article covers the 10 most common differences between MS SQL and MySQL syntax. The target audience for this guide should have general database management knowledge and experience composing SQL queries.




Sometimes MS SQL table or column names are enclosed in square brackets in queries (e.g. if a name contains spaces, or for other reasons). MySQL does not allow square brackets around table or column names; they must all be replaced with the backtick character or removed: [object] -> `object`.
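As a sketch of this rewrite (the table and column names below are invented for illustration):

```sql
-- MS SQL: identifiers containing spaces wrapped in square brackets
SELECT [Order Id], [Order Total]
FROM [Sales Orders];

-- MySQL: backticks instead; plain names need no quoting at all
SELECT `Order Id`, `Order Total`
FROM `Sales Orders`;
```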

MS SQL provides an effective way to avoid object-naming conflicts and to manage user permissions on data access: the schema, a logical container used to group and categorize objects inside a single database. When schemas are used, the full name of a database object in a query may look like database.schema.object. MySQL has no such semantics, so all schema names must be cut off from queries.
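For example, assuming a hypothetical SalesDb database using the default dbo schema, the rewrite looks like this:

```sql
-- MS SQL: the fully qualified name includes the schema
SELECT * FROM SalesDb.dbo.Customers;

-- MySQL: no schema level exists, so the schema name is cut off
SELECT * FROM SalesDb.Customers;
```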

In MS SQL, the CONVERT() function converts an expression from one data type to another. In MySQL, the CONVERT() function instead converts text data between different character sets. However, there is an equivalent function, CAST(), so every occurrence of CONVERT(type, expression) in an MS SQL query must be replaced with CAST(expression AS type) in the MySQL query. The LEN() function returns the length of a string expression in MS SQL; the MySQL equivalent of this function is LENGTH(). The MS SQL function DATEADD() adds an interval to the specified part of a date; the MySQL '+' operator can do the same, as follows:
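A sketch of all three rewrites against a hypothetical orders table:

```sql
-- MS SQL
SELECT CONVERT(varchar(10), total),
       LEN(customer_name),
       DATEADD(day, 7, order_date)
FROM orders;

-- MySQL: CAST with the argument order flipped (and CHAR as the
-- target type), LENGTH, and '+' with an INTERVAL in place of DATEADD
SELECT CAST(total AS char(10)),
       LENGTH(customer_name),
       order_date + INTERVAL 7 DAY
FROM orders;
```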

The MS SQL '+' operator concatenates strings like this: 'string1' + 'string2'. In MySQL, such expressions must be replaced with CONCAT('string1', 'string2'). The MS SQL function CONTAINS(expression, template) searches for matches of template inside expression; MySQL's LIKE operator implements the same semantics: expression LIKE '%template%'. If an MS SQL query contains the 'TOP (100) PERCENT' pattern, just cut it off when composing the MySQL query. If the pattern contains another percentage amount, it can be replaced by the following code in MySQL (works in MySQL 5.0.7 and higher):
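One sketch covering these rewrites; the orders table is hypothetical, and the TOP (10) PERCENT emulation relies on prepared statements accepting a placeholder in LIMIT, which MySQL supports from 5.0.7:

```sql
-- MS SQL: 'a' + 'b'              -> MySQL: CONCAT('a', 'b')
-- MS SQL: CONTAINS(notes, 'x')   -> MySQL: notes LIKE '%x%'

-- MS SQL: SELECT TOP (10) PERCENT * FROM orders ORDER BY order_date;
SELECT CEIL(COUNT(*) * 10 / 100) INTO @row_cnt FROM orders;
PREPARE stmt FROM 'SELECT * FROM orders ORDER BY order_date LIMIT ?';
EXECUTE stmt USING @row_cnt;
DEALLOCATE PREPARE stmt;
```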

The phpMyAdmin option is a nice way to do this job, but sometimes you will see errors while converting. We actually studied the DB structure of the MSSQL database, wrote our own MySQL statements based on it, did our unit testing, and learned a few things. So apart from conversion, if you also want to do it hands-on, this is a good approach.

The simple answer is: you cannot "convert" an Access database to a SQL Server database. You can, however, import the Access database (tables only) into SQL Server. Remember that SQL Server is a true database server and, as such, contains no front end or GUI creation mechanisms. You will still need Access (or C#, VB, or another front-end builder) to create the interface.

You may already know this, but in SQL Server a Query is called a View, and Modules are called Stored Procedures. You will need to convert your Access queries and modules accordingly if you're planning on having that all reside server-side.

It's actually pretty easy to import Access tables into SQL Server, you would just create a database on a server, right-click on the database name and choose Tasks --> Import Data. There you will choose Microsoft Access as your Data Source. The rest should be pretty self-explanatory.

JonnyBones wrote a good, full answer. Others have made suggestions for tools which may work. However, SSMA (SQL Server Migration Assistant) is likely to be the best option now. (See this YouTube video for help understanding some of the issues to overcome and an explanation of how to use SSMA.)

Be aware that the MS Access Upsizing Wizard was great but is now discontinued, and there was a bit of a gap before MS created a decent version of SSMA, which is when and why these other tools came into existence.

Our HAProxy1 logs, aka traffic logs, are currently stored on two SQL Servers: one in New York and one in Colorado. While we store only a minimal summary of the traffic data, we have a lot of it. At the beginning of 2019, we had about 4.5 years of data, totaling about 38TB. The database was initially designed to have a single table for each day. This meant that at the start of 2019 we had approximately 1,600 tables in a single database, spread over multiple database files (due to the 16TB size limit of data files). Each table had a clustered columnstore index, which held anywhere from 100-400 million rows.

As I mentioned, we were dealing with very little free disk space on both servers. This data is primarily used by the developers for investigating issues and by the data team for analysis. We already had purged some data, but for the data team - more data is better.

I started the migration in Colorado on January 14, 2019 (yes, I know the exact date). Pulling millions of rows of data from spinny drives, and then inserting them back into new tables on those same spinny drives, was slow. By day 2, I knew this was going to be painful. My original plan was to run the migration in Colorado first, then in New York. However, once I saw how long each table was taking, I decided to run them simultaneously. About a week later, I kicked off the New York data migration.

The first issue was with the C# script. The script periodically timed-out when querying the database. After adjusting the settings, I crossed my fingers and hoped it would continue to work, which it did, for the most part.

Let me take a quick step back and explain the old process. The original tables, aka the daily tables, were rowstore tables with a generic identity (ID) column to make each row unique. After 14 days, we would drop the primary key from the table and add a clustered columnstore index. Even after the conversion to a clustered columnstore, the ID column would remain.
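A T-SQL sketch of that lifecycle; the table, column, and index names here are invented for illustration:

```sql
-- Days 1-14: rowstore daily table with an identity primary key
CREATE TABLE dbo.TrafficLog_20190114 (
    ID      bigint IDENTITY(1,1) NOT NULL
            CONSTRAINT PK_TrafficLog_20190114 PRIMARY KEY CLUSTERED,
    LogTime datetime2 NOT NULL,
    Url     nvarchar(400) NOT NULL
    -- ...remaining summary columns
);

-- After 14 days: drop the primary key and convert to a clustered
-- columnstore index; note the ID column itself is left in place
ALTER TABLE dbo.TrafficLog_20190114
    DROP CONSTRAINT PK_TrafficLog_20190114;
CREATE CLUSTERED COLUMNSTORE INDEX CCI_TrafficLog_20190114
    ON dbo.TrafficLog_20190114;
```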

In the new table, we dropped the ID column. Dropping this column meant that trying to figure out the uniqueness of the rows inserted before a failure would be incredibly difficult. It was far easier, in the event of a failure, to just delete everything from the table for that day, plus everything after that day (if it had moved to another table), and restart the process.
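The copy/verify/restart loop for one daily table might be sketched like this in T-SQL (database, table, and column names are hypothetical):

```sql
-- Copy one day of data into the new monthly table
INSERT INTO NewDb.dbo.TrafficLog_201901 (LogTime, Url /* ... */)
SELECT LogTime, Url /* ... */
FROM OldDb.dbo.TrafficLog_20190114;

-- Verify the row counts match; on a mismatch, delete the day (and
-- anything after it) and restart, since individual rows are no
-- longer uniquely identifiable without the ID column
IF (SELECT COUNT_BIG(*) FROM OldDb.dbo.TrafficLog_20190114)
   <> (SELECT COUNT_BIG(*) FROM NewDb.dbo.TrafficLog_201901
       WHERE LogTime >= '20190114' AND LogTime < '20190115')
    RAISERROR ('Row count mismatch for 2019-01-14', 16, 1);
```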

I went back to the drawing board to figure out some way to move the data faster. Obviously, I wanted to follow the same steps - insert old data, verify the row count matches, move to the next day, repeat - and I used the C# script as my model. I mean, it worked; it was just a little slow. I tried a bunch of different things, but none were blazingly fast:

When your servers are already limited in free space and you have to move 40TB of data around on the same server, it becomes a little like whack-a-mole to get things moved. I had to get creative to free up space while continuing to move all the data to the new format.

After moving each month, I would do a final validation, then it was time for some cleanup. This included deleting all of the daily tables for the month that were just migrated, as well as dropping the OriginalLogTable column from the monthly table in the new database. The daily tables were no longer needed because we had the data in the new table, so getting rid of duplicate data would allow me to gain back disk space…sort of.

As I moved the data from the old files, I was increasing the size of the new data files. This was all being done on the E: drive that was 85-90% full, which I mentioned above. I was constantly trying to keep the used disk space in check.

Each month of data was anywhere from 300GB to 1TB in size, and the 3 old data files were approximately 12TB each. At the end of each migration, I tried to shrink the old database files by the amount we just deleted. This took several attempts to get something that worked well.
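The per-month shrink might look like the following, assuming a hypothetical logical file name and a target size a bit below the file's current ~12 TB:

```sql
-- Shrink one old data file toward a new target size, given in MB,
-- releasing the space freed by the month of data just deleted
DBCC SHRINKFILE (N'TrafficLogs_Data1', 11800000);
```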

I had a couple of options to free up space on the D: drive - move tempdb or move the other log/database files to our newly configured F: drive with 14TB of free space. I decided to move tempdb because it was as easy as executing the following command and restarting the SQL service.
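Relocating tempdb is normally done with the documented ALTER DATABASE ... MODIFY FILE pattern, which takes effect only after the service restart; the drive path and logical file names below are assumptions:

```sql
-- Point the tempdb files at the new F: drive; the change takes
-- effect once the SQL Server service is restarted
ALTER DATABASE tempdb
    MODIFY FILE (NAME = tempdev, FILENAME = 'F:\tempdb\tempdb.mdf');
ALTER DATABASE tempdb
    MODIFY FILE (NAME = templog, FILENAME = 'F:\tempdb\templog.ldf');
```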

Unfortunately, the only issue with doing this was that once I filled the 14TB of disk space, I had to migrate the data files back to the spinny drives, which I did at the end of March 2019 after I ran out of drive space.

In early May 2019, I was still plugging away at migrating data. At this point, I was back to where I started - copying data from the old database on the spinny drives, inserting it into the new database, deleting the old data, and then shrinking the files all on the same drives.

After months of doing this, I was seeing a noticeable slowdown in the process. The read/write delays on the drives had increased since January: average read delays were over 100 milliseconds, and writes were at about 80 milliseconds.

The plan was to get the new servers in June/July, but the purchase was delayed several months. After respeccing them to have all NVMe SSD storage, they finally arrived in late September. To make the final migration of the new hardware as easy as possible, from June through the end of September, I moved one table a day, keeping things up to date.

I would be moving from server to server within the same datacenter and rack, so I wanted to see how fast it would be to use various methods to bulk insert the data from the old database to the new one. I tried:

I really, really, really did not want to move hundreds of tables again, so I landed on the idea of using an availability group to automatically seed the database to the new servers. We already have AGs throughout our infrastructure, so I was familiar with using them, but I was unsure whether it would successfully work for a database of this size.

Before initiating the failover, I ran a DBCC CHECKDB on the new server aka the secondary in the availability group to be sure the database was in a good state, and after 22 hours it completed with no issues reported.
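That integrity check was presumably the standard form of the command, something like the following (the database name is assumed):

```sql
-- Full consistency check; report all errors, suppress info messages
DBCC CHECKDB (N'TrafficLogs') WITH NO_INFOMSGS, ALL_ERRORMSGS;
```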
