I am trying to bulk export out of a SQL table. The code works fine, nothing wrong with it at all; the problem resides in exporting 50 columns at 2.7 million rows. Can anybody help me with the OutOfMemory exception? I know why it's happening: it tries to read it all into memory and then write. But I'm not sure how to break it up so it reads line by line and then writes out, without taking forever.
string contents = File.ReadAllText(textBox3.Text);
string tableid = Regex.Match(contents, @"CoreDBCaseID=(?<id>\d+)").Groups["id"].Value;
string server = Regex.Match(contents, @"Server=(?<Server>[^;]+)").Groups["Server"].Value;
string security = Regex.Match(contents, "Security=(?<Security>[^;]+)").Groups["Security"].Value;
string database = Regex.Match(contents, "Database=(?<Database>[^\r]+)").Groups["Database"].Value;
string[] data = new string[] {
string.Format("Table={0}", tableid),
string.Format("Server={0}", server),
string.Format("Security={0}", security),
string.Format("Database={0}", database),
};
string sqltable = ("TDS_C" + tableid + "_table");
String cols = String.Join(",", listBox1.Items.Cast<String>().ToArray());
string sql = "select " + cols + " from " + sqltable + ";";
SqlConnection con = new SqlConnection("Data Source=" + server + ";Initial Catalog=" + database + ";Integrated Security=" + security);
con.Open();
SqlDataAdapter tabadapter = new SqlDataAdapter(sql, con);
DataSet dataset = new DataSet();
tabadapter.FillSchema(dataset, SchemaType.Source, sqltable);
tabadapter.Fill(dataset, sqltable);
DataTable tbltarget = dataset.Tables[0];
string output_text =
tbltarget.Columns.Cast<DataColumn>().ToList()
.Select(col => col.ColumnName)
.Aggregate((current, next) => current + "|" + next) + "\r\n"
+
tbltarget.Rows.Cast<DataRow>().ToList()
.Select(row => row.ItemArray.Aggregate((current, next) => current.ToString() + "|" + next.ToString().Replace("\n", "")))
.Cast<string>().Aggregate((current, next) => current + "\r\n" + next);
File.WriteAllText(@"C:\Export.txt", output_text);
con.Close();
Don’t fill a dataset. Instead, use a data reader, then you can stream that data out as you read it in, meaning a much smaller memory footprint.
You could look into doing async reads from the db and have that trigger an async write to the file. This would stop each read-write being a fully blocking operation.
Simple example:
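A minimal sketch of the reader-based approach, assuming the same connection string and SQL built in the question (the `Export` helper name and output path are illustrative, not part of the original code):

```csharp
using System;
using System.Data.SqlClient;
using System.IO;
using System.Linq;

class Exporter
{
    static void Export(string connectionString, string sql, string outputPath)
    {
        using (var con = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(sql, con))
        using (var writer = new StreamWriter(outputPath))
        {
            con.Open();
            using (var reader = cmd.ExecuteReader())
            {
                // Header row: pipe-delimited column names, same layout as the
                // original DataSet version.
                var names = Enumerable.Range(0, reader.FieldCount)
                                      .Select(reader.GetName);
                writer.WriteLine(string.Join("|", names));

                // Stream one row at a time; only the current row is held in
                // memory, so 2.7 million rows never accumulate in a DataSet.
                var values = new object[reader.FieldCount];
                while (reader.Read())
                {
                    reader.GetValues(values);
                    writer.WriteLine(string.Join("|",
                        values.Select(v => v.ToString().Replace("\n", ""))));
                }
            }
        }
    }
}
```

For the async variant mentioned above, swapping the blocking calls for `reader.ReadAsync()` and `writer.WriteLineAsync(...)` inside an `async` method keeps each read and write from fully blocking the thread.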