"Failed to read file" when loading parameter file in VuGen


Kent Rune Klungerbo

Nov 4, 2014, 10:46:55 AM
to LR-Loa...@googlegroups.com
I am experiencing an issue with loading parameter values from a file in LoadRunner 12.01 (experienced same issue with 11.52).

I have just created a LoadRunner script in the Oracle 2-Tier protocol, not recorded anything yet, and I've set up a list of parameters that take their values from a file. Into this file I've copied several rows of comma-separated test data (the data includes letters, numbers and a few special characters).

The issue: somewhat randomly, after updating the .dat file with more data, I get an error popup in VuGen:

Failed to read file C:\.....\datafile.dat from line xx, check file format.

With some sets of rows it works fine: the data is distributed into rows and columns for my parameters to choose values from. But sometimes I get the above error after adding new rows to the .dat file via Notepad (I've also tried editing the file directly with Notepad++).

I've tried saving the file in the different formats available (ANSI, Unicode, UTF-8), but it does not help. I've also tried using Notepad++ to convert the data to ANSI (which seems to be the standard format VuGen uses when it creates the .dat file), but that does not help either. And I am not able to see anything about "line xx" that distinguishes it from the other lines in the data file.

These are 3 rows from the file, of which the middle row is "line xx" in the last occurrence of the error:

0546,5310635479852288,704825,0.360000,0.625000,2014-09-29 12:06:39,644332,11:58:27,2015-02-03,2015-03-01,7311,250,51,445,7,58453064521,29594076265,sbE,SYbgqOzuoCGc,sboRlS,00,vCBcPygDjwTQGNeI,59524233,OPTIK TESTDATA&SCHM ALTENMARKT/PO AT,V,511,978,l,rxERnCzSmBDPfIjAVgaewdYWJcOQXKNvbls,631228925444144640,531063LXf3iO3w8t2288
5412,5542578426517982,761850,2.945000,190.000000,2014-10-22 02:02:33,501418,22:25:17,2015-04-05,2015-03-10,5399,380,51,615,78,25220576545,29384400797,KqI,whQIyfzPNLWM,hfFZqO,00,IHnNxKFqYgyGXTct,99213186,VB RB ROBOschlesien  Markerodor    DE,i,978,978,i,bxVDgEvKUiZJlTkhSwczdRneyQIpOuHfLoa,833145215889534208,531063tC1d5BSGta7982
6875,6521458164986744,196925,0.270000,1.110000,2014-10-26 15:55:38,110208,17:39:07,2015-05-28,2015-03-01,5999,900,801,936,51,25206851826,63652228016,zqw,BQFveWrdtPay,CWZRvE,00,dxcQmizqtpwOvnJo,00903106,HVB,,n,978,978,D,sCvfaIUPtJRZOLnKrHybxDmEYVpeQMjulci,276871249734249152,531065yWPzSF0krK6744
I've replaced some digits and letters to protect the test data.

I am unable to see why the middle row would cause any issue, and there doesn't seem to be any clear pattern in what triggers these errors. I've tried different sets of rows; I have a base set of 500,000 rows to choose from, although I only copy fewer than 100 rows into the data file at a time.

Any idea what the cause might be?

James Pulley

Nov 5, 2014, 10:00:03 AM
to LR-Loa...@googlegroups.com
I took your data and pulled it into Excel as a comma-delimited data source. You have a variance in your three lines beginning at column Y of the spreadsheet. This difference in the number of columns per row may be the root of your issue.
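For anyone who wants to check this outside Excel, here is a small standalone sketch (not part of VuGen; the function name and file path are illustrative) that reports any row whose comma-separated field count differs from the first row's:

```python
import csv

def check_column_counts(path):
    """Return (line_number, field_count) for every row whose
    comma-separated field count differs from the first row's.
    Note: csv.reader turns a double comma into an extra empty
    field, so rows like 'HVB,,n' are counted accordingly."""
    with open(path, newline="") as f:
        rows = list(csv.reader(f))
    expected = len(rows[0])
    return [(i, len(r)) for i, r in enumerate(rows, 1) if len(r) != expected]
```

Running it against the .dat file would flag the offending "line xx" directly, rather than eyeballing long rows.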

James Pulley
http://www.perfbytes.com     The Podcast of Software Performance
http://www.loadrunnerbythehour.com     Honest, effective, onshore LoadRunner services
http://www.litesquare.com     A new bargain in performance engineering.   We do the work & you only pay when there is a benefit

Richard Bishop

Nov 9, 2014, 11:25:03 AM
to LR-Loa...@googlegroups.com
Good spot Mr Pulley :-)

Double comma after HVB.......

Kent Rune Klungerbo

Nov 11, 2014, 2:44:18 AM
to LR-Loa...@googlegroups.com
Hmm, yes, that seems to be the issue for this line, although I am still having problems with the data file (including with rows that have no double commas). Anyway, this is a 500,000-row .dat file, and I fear LoadRunner won't handle it well with ~250 Vusers. I once came across a forum post linking to a LoadRunner add-on for handling big data sets for parameterization, but I can't find it now. Do you have any suggestions for dealing with very large data sets in LR parameterization? I have a feeling this should be its own post...

James Pulley

Nov 11, 2014, 3:41:43 PM
to LR-Loa...@googlegroups.com
Not an issue with LoadRunner, but perhaps an issue with the amount of available RAM on your load generators, since this file has to be loaded into memory at the beginning of the test. This is done deliberately, to avoid disk contention from multiple virtual users during the test. None of the LoadRunner code is marked as non-swappable, so you can face all of these items being swapped into and out of memory if you are low on physical RAM.
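To get a feel for the footprint, a back-of-the-envelope estimate (the 400 bytes/row figure is an assumption based on the sample rows earlier in the thread, which are roughly that long):

```python
# Rough memory estimate for one copy of the parameter file held
# in RAM on a load generator. 400 bytes/row is an assumed average
# taken from the sample rows, not a measured value.
rows = 500_000
bytes_per_row = 400
total_mb = rows * bytes_per_row / (1024 * 1024)
print(f"~{total_mb:.0f} MB per copy of the file")
```

That is before any swapping pressure from the Vusers themselves, which is why trimming the file matters.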

Rule of thumb: go with the smallest parameter file that gets the job done. A data file of 500K rows is overkill. Consider parameters of first name, middle initial and last name picked randomly from files 100, 26 and 100 rows deep: that produces 260,000 name combinations. Add in a random address and zip and you have plenty of data.
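The combinatorics above can be sketched as follows (the list contents are made up for illustration; in VuGen itself you would use three separate file parameters each set to random selection):

```python
import random

# Illustrative stand-ins for three small parameter files of
# 100, 26 and 100 entries respectively.
first_names = [f"First{i}" for i in range(100)]
initials = [chr(c) for c in range(ord("A"), ord("Z") + 1)]
last_names = [f"Last{i}" for i in range(100)]

# Three small lists picked independently at random yield
# 100 * 26 * 100 = 260,000 distinct full names.
combinations = len(first_names) * len(initials) * len(last_names)

def random_name():
    """Pick one value from each list, as three random parameters would."""
    return (f"{random.choice(first_names)} "
            f"{random.choice(initials)}. "
            f"{random.choice(last_names)}")
```

Three files totalling 226 rows replace a single 260,000-row file, which is the point of the rule of thumb.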

If you are loading a phone book, which apparently is the case here from a size perspective, you are likely engaging in an action that could be done more efficiently.

abhishek agarwal

Dec 1, 2014, 11:46:15 AM
to LR-Loa...@googlegroups.com
Thanks, I also got my problem resolved.

Krunx 2

Dec 7, 2016, 10:01:49 AM
to LoadRunner
For those of you still facing a similar problem, there is an 'Import Parameter' option in the Parameter List window. It imports input from a source file whose values are in a different encoding and saves them into the parameter file.