Information: 0x4004300A at Import to recordset, SSIS.Pipeline: Validation phase is beginning.
Error: 0xC004701C at Import to recordset, SSIS.Pipeline: Add definition Ids.Inputs[Lookup Input].Columns[THUNK_COLUMN] has lineage ID 106 that was not previously used in the Data Flow task.
Error: 0xC004706B at Import to recordset, SSIS.Pipeline: "Add definition Ids" failed validation and returned validation status "VS_NEEDSNEWMETADATA".
Error: 0xC004700C at Import to recordset, SSIS.Pipeline: One or more component failed validation.
Error: 0xC0024107 at Import to recordset: There were errors during task validation.
// Parse the column names into a string array, which we'll use in a
// select to unpivot the result.
// (Requires "using System.Linq;" and "using System.Data;", plus a
// project reference to System.Data.DataSetExtensions for AsEnumerable.)
string column_names = (string)Dts.Variables["User::column_names"].Value;
string[] column_array = column_names.Split(',').Select(sValue => sValue.Trim()).ToArray();

// Select each row and column to unpivot the result into a generic
// list of name/value pairs, converting the values to strings.
var resultSet = dt.AsEnumerable()
    .SelectMany(row =>
        column_array.Select(col_name => new
        {
            field_name = col_name,
            field_value = Convert.ToString(row[col_name])
        }))
    .ToList();
This may not perform optimally for big data, but my result sets are numerous and small, so gathering them together temporarily into a recordset works well. At the end of each query in the loop, I read the recordset into a new DataTable and merge it into another DataTable I'm holding in a package variable. I finish with a single DataTable containing all my unpivoted results, which I send to a stored procedure for a bulk insert within a transaction.
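The read-and-merge step described above could be sketched roughly as follows. This is an illustrative, untested sketch: the variable names User::result_recordset and User::all_results are placeholders I've invented, not names from the original package.

```csharp
using System.Data;
using System.Data.OleDb;

// Load the recordset (an SSIS Object variable populated by the data
// flow) into a DataTable for this loop iteration.
var batch = new DataTable();
using (var adapter = new OleDbDataAdapter())
{
    adapter.Fill(batch, Dts.Variables["User::result_recordset"].Value);
}

// Merge this iteration's rows into the accumulating DataTable held in
// a package variable, so one table ends up holding every unpivoted
// result from the loop.
var accumulated = (DataTable)Dts.Variables["User::all_results"].Value;
accumulated.Merge(batch);
Dts.Variables["User::all_results"].Value = accumulated;
```

Note that DataTable.Merge matches rows by primary key when one is defined; with no key set, it simply appends the incoming rows, which is the behavior wanted when accumulating results.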
Thanks for responding. Starting simple and adding components back in one at a time is good advice.