Data migration from SQL Server to Postgres fails on big tables, the process gets stuck #1623
I had a similar issue to the above, running the Docker image on Windows 10. I tried many things, including giving more resources to WSL2 and disabling swap, but the only way I got it not to fail with heap exhaustion on big tables was to include the following parameter in the WITH clause:
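(The sketch below is illustrative, not necessarily the commenter's exact setting; `prefetch rows` is pgloader's documented option for how many rows are read into memory at a time.)
```
WITH prefetch rows = 10000   -- assumed example value; lower it if the heap is still exhausted
```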
This can be adjusted according to your data volume.
This is my CPU: (screenshot)
Hi CIO,
Thanks for contributing to pgloader by reporting an
issue! Reporting an issue is the only way we can solve problems, fix bugs,
and improve both the software and its user experience in general.
The best bug reports follow those 3 simple steps:
show what you did,
I ran pgloader to migrate a database from SQL Server to Postgres.
We have issues with tables that have more than 2 million records: I cannot migrate the big tables, the process gets stuck.
show the result you got,
```
[postgres@rbalvdpost03 Archive]$ pgloader --verbose auditRecordsMaterialized.load
2024-11-08T15:32:21.002000Z NOTICE Starting pgloader, log system is ready.
2024-11-08T15:32:21.010000Z LOG pgloader version "3.6.7~devel"
2024-11-08T15:32:21.157001Z LOG Migrating from #<MSSQL-CONNECTION mssql://sqlpostgres_user@Sqlserver.emea.com:1433/Core.Pack232Sandpit {10067E6D43}>
2024-11-08T15:32:21.157001Z LOG Migrating into #<PGSQL-CONNECTION pgsql://postgres2@Postgreserver.emea.com:5432/Core.Pack232BigTable {10067E7043}>
Max connections reached, increase value of TDS_MAX_CONN
2024-11-08T15:32:21.538003Z NOTICE Prepare PostgreSQL database.
2024-11-08T15:32:21.678004Z NOTICE COPY audittrail.auditrecordsmaterialized with 0 rows estimated [3/4]
Max connections reached, increase value of TDS_MAX_CONN
```
The console shows this and advances no further.
On the SQL Server side I can see the SELECT query for the table in a suspended state with an ASYNC_NETWORK_IO wait,
and on the Postgres server I can see the COPY command, but it is in idle status.
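For reference, one way to observe both sides while the load is stuck (a sketch; the WHERE filters are assumptions and may need loosening for your sessions):
```
-- On SQL Server: find the pgloader SELECT and its wait type
SELECT session_id, status, command, wait_type
FROM sys.dm_exec_requests
WHERE wait_type = 'ASYNC_NETWORK_IO';

-- On PostgreSQL: find the COPY session and its state
SELECT pid, state, wait_event, query
FROM pg_stat_activity
WHERE query LIKE 'COPY%';
```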
My load file is this one:
```
load database
     from mssql://sqlpostgres_user:xxxxxxxxx@Sqlserver.emea.com/Core.Pack232Sandpit
     into postgresql://postgres2:xxxxxxx@Postgreserver.emea.com/Core.Pack232BigTable
WITH include drop, truncate, create tables, create indexes, reset sequences, foreign keys
including only table names like 'AuditRecordsMaterialized' in schema 'AuditTrail'
SET work_mem to '256MB', maintenance_work_mem to '1024MB';
```
The table is not migrated to the Postgres server; it has 6 million records. Other tables in the same database with fewer records have been migrated fine.
Any idea how I can tackle this?
Best regards
In the case of pgloader, here's the information I will need to read in your
bug report. Having all of this is a big help, and often means the bug you
reported can be fixed very efficiently as soon as I get to it.
Please provide the following information:
```
pgloader version "3.6.7~devel"
compiled with SBCL 2.2.10-1.rhel9
```
did you test a fresh compile from the source tree? No
Compiling pgloader from sources is documented in the
README, it's
easy to do, and if patches are to be made to fix your bug, you're going
to have to build from sources to get the fix anyway…
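For reference, the short version of that build, as documented in the README (prerequisite package names vary by distro):
```
git clone https://github.com/dimitri/pgloader.git
cd pgloader
make pgloader                  # needs sbcl, make, and freetds installed
./build/bin/pgloader --version
```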
did you search for other similar issues? No
how can I reproduce the bug? No
Include a self-contained pgloader command file.
If you're loading from a database, consider attaching a database dump to
your issue. For MySQL, use
mysqldump
. For SQLite, just send over your source file, that's easy. Maybe not the one with your production data, of
course; the one with just a sample of data that allows me to reproduce
your bug.
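As a sketch, a dump of just the affected table could be produced like this (database and table names are placeholders):
```
mysqldump --single-transaction mydb mytable > sample.sql
```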
When using a proprietary database system as a source, consider creating
a sample database on some Cloud service or somewhere you can then give
me access to, and see my email address on my GitHub profile to send me
the credentials. Still open a public issue for tracking and as
documentation for other users.