Data Flow Task Plus - Multiple SQL views to multiple flat files

leighan...@gmail.com

Dec 19, 2016, 9:54:20 AM
to COZYROC

I'm trying to use the CozyROC SSIS Data Flow Task Plus to dynamically iterate through a list of SQL views and put the results into pipe-delimited text files (one pipe-delimited text file per SQL view). I have set up all the variables and Foreach Loop structures needed to iterate through the views and dynamically set the text file output. My issue is that the sample videos all go from flat files into OLE DB, and I'm going the other way around.

In my OLE DB Source, I've used the Advanced Editor to remove all the columns except one and to rename it THUNK_COLUMN. I've removed them from both the Source Output "External Columns" and "Output Columns." SSIS will not allow you to remove the other column names from the "Source Error Output." I tried renaming them all to notused, but that's not working either.

On the Flat File Destination, I have an expression that determines the filename/path dynamically, too. With the Advanced Editor open, I've mapped THUNK_COLUMN to THUNK_COLUMN. I have "Validate Metadata" set to False on both the source and destination components, and on the CozyROC Data Flow Task Editor itself, I have the source and destination set to enabled.
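
For context, the list of views driving the Foreach Loop comes from a query along these lines (the schema filter here is just an illustration; substitute your own):

    -- Enumerate the views to export; each row drives one loop iteration.
    SELECT TABLE_SCHEMA, TABLE_NAME
    FROM INFORMATION_SCHEMA.VIEWS
    WHERE TABLE_SCHEMA = 'rpt'  -- placeholder schema
    ORDER BY TABLE_NAME;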



When I execute the package, I get the error "[OLE DB Source Error Output].Columns[notused] on the error output has no corresponding output column on the non-error output." This is because SSIS won't let me remove that column from the error output.


Can anybody tell me whether I'm taking the right approach? Will I be able to use an OLE DB Source the same way they used the flat file source in the "Basics" video and remove the excess columns?

I also tried the sample package contributed by Paul McMillan, which demonstrates a dynamic data flow from a source to a destination database, driven by metadata. He's going OLE DB to OLE DB, though. I can't find anybody going OLE DB to flat file to use as a reference. Any help is greatly appreciated.

Ivan Peev

Dec 19, 2016, 9:59:11 AM
to COZYROC
Hi Leigh Anne,

Remove the mappings in the 'Column Mappings' tab before deleting the columns. This will remove the columns from the error output.

leighan...@gmail.com

Dec 19, 2016, 10:35:21 AM
to COZYROC
Ivan,
Thank you. I got beyond that error now. This time it fails on the OLE DB source, stating that the column "THUNK_COLUMN" cannot be found in the data source.

Can you tell me if I'm at least on the right track? Can I follow the example from the video here, just in reverse? (She was going CSV to OLE DB; I need to go OLE DB to txt.) Is it correct that all I have to do in the OLE DB source is remove all columns but one and rename it THUNK_COLUMN, and then in the text file output I just have it going to a pipe-delimited file and map THUNK_COLUMN to THUNK_COLUMN? Or do I have to take the approach from Paul McMillan's sample, where I get the metadata and create a dynamic expression for the column that includes r\d? I just need something to follow. I feel like I'm very close.

Ivan Peev

Dec 19, 2016, 10:38:02 AM
to COZYROC
Leigh Anne,

Right-click on the component and select Properties. Set ValidateExternalMetadata=False. Also set DelayValidation=True on the Data Flow Task Plus. Then try executing again.

leighan...@gmail.com

Dec 19, 2016, 10:55:38 AM
to COZYROC
I had already set those after watching the video and reading the help blogs. I tried recreating the whole DFT again. This time, I'm getting:
[Flat File Destination [228]] Error: Data conversion failed. The data conversion for column "ProvMatchData" returned status value 2 and status text "The value could not be converted because of a potential loss of data.".

ProvMatchData was one of the column names before I removed them and switched to THUNK_COLUMN. I've got my THUNK_COLUMN as a String (DT_STR) of length 8000 with code page 1252; that is just what SSIS defaulted to when I set it up. Do I need to change this?

Ivan Peev

Dec 19, 2016, 11:11:02 AM
to COZYROC
Leigh Anne,

Is your source data Unicode? If that is the case, you have to set up your destination flat file as Unicode, too.
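
Alternatively, if you want to keep the flat file non-Unicode, you can convert on the source side instead. A rough sketch, assuming you control the source query (the view and column names here are placeholders):

    -- Cast nvarchar (Unicode) data to varchar before it enters the data flow,
    -- so it matches a DT_STR / code page 1252 pass-through column.
    SELECT CAST(ProvMatchData AS varchar(8000)) AS ProvMatchData
    FROM dbo.YourView;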

leighan...@gmail.com

Dec 20, 2016, 9:55:28 AM
to COZYROC
Ivan,
I finally got it to work, but I'm not sure why. I just kept rebuilding the DFT+ task over and over until it finally worked. It's now exporting the multiple views to multiple flat files.

When I look at the Dynamic tab of the Data Flow Task Plus editor, I have the Flat File Destination enabled, but I don't have anything set for Mapping or Column Delimiter. I set the ConnectionString property on the flat file connection to an expression using a variable. I don't know if that's why this part wasn't necessary. Can you tell me why it is working like this? I want to understand what I did that "fixed" it. I'm including a screenshot.
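
For reference, the expression on the flat file connection manager's ConnectionString property is along these lines (these variable names are mine; yours will differ):

    @[User::OutputFolder] + "\\" + @[User::ViewName] + ".txt"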


DynamicEditor.jpg

Ivan Peev

Dec 20, 2016, 10:03:18 AM
to COZYROC
Leigh Anne,

Congratz! ;) Without knowing what you had set up in the previous packages, it is hard to tell what was wrong. The Mapping parameter has to be set up only if the destination columns differ from the source columns. The ColumnDelimiter parameter does have to be set up, but if you are setting it with an expression on the connection manager, it should be fine.