How can we run Selenium-SpecFlow tests using SpecRun (migrating from MSTest) on a TFS build server?


Vemula Raju

May 19, 2016, 3:04:40 AM
to SpecRun
Please explain the step-by-step procedure for running Selenium-SpecFlow tests using SpecRun on a TFS build server as a nightly run. I have the paid version of SpecRun.
  • We were using MSTest as the unit test provider
  • We installed a test agent when we were running the tests with MSTest as the unit test provider
  • We had no problems running the tests with MSTest as the unit test provider
We have now changed the unit test provider to SpecRun; the environment details are as follows:
  • SpecFlow v2.0.0.0
  • SpecFlow+ 1.3.0
  • TFS Build Server 2015
  • Unit Test Provider = SpecRun
  • TFS build template using TfvcTemplate.12.xaml
  • Please refer to the attached sample of the TFS build template we are using


Please clarify the following:
  • Can you point me to any changes required to the TFS build template?
  • Is there anywhere in the build template configuration where we need to specify SpecRun?
  • We think there is a problem with the build service interacting with the UI, and all the tests failed
  • Our understanding is that we need to allow the build service to interact with the desktop (interactive services)

It would be greatly appreciated if somebody could respond to this ASAP.

Thanks,
Raj

Stephen McCafferty

May 19, 2016, 10:46:57 AM
to SpecRun
Hi Raj,

There are a couple of things you need to do, and there is a known issue as well.

In order for the tests to execute, the binaries (the DLLs in the various SpecFlow packages) need to be available to the build agent. You either:
  • need to ensure that NuGet restore is enabled, in which case the files are downloaded automatically, or
  • need to check in the DLLs in the corresponding SpecFlow, SpecRun and SpecFlow+ packages. These DLLs are located in the corresponding sub-folders of your solution's packages directory.
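If the agent still cannot locate the adapter once the packages are available, one option is to point the test runner at it explicitly via TestAdaptersPaths in the .runsettings file. This is only a sketch: the package folder name and version below are assumptions, not taken from this thread, so adjust them to whatever the SpecRun.Runner package restores into your packages directory.

```xml
<?xml version="1.0" encoding="utf-8"?>
<RunSettings>
  <RunConfiguration>
    <!-- Hypothetical path: point this at the tools folder of the restored
         SpecRun.Runner package in your solution's packages directory -->
    <TestAdaptersPaths>..\packages\SpecRun.Runner.1.3.0\tools</TestAdaptersPaths>
  </RunConfiguration>
</RunSettings>
```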


There is a known issue caused by the XAML build agents caching the test adapters. This means that you can only use a single version of SpecFlow+ Runner with the same build agent. If you have more than one version of SpecFlow+ Runner installed, you need to remove the additional versions to ensure that the latest version is used, instead of the cached version.


If you need to use different versions for different builds, you will need to define a separate build agent for each build.

Vemula Raju

May 19, 2016, 9:49:21 PM
to SpecRun
Thanks Stephen,
The following two points are taken care of:
  • NuGet restore is enabled, so the files are downloaded automatically
  • The DLLs in the corresponding SpecFlow, SpecRun and SpecFlow+ packages are checked in
Can you please tell me more about the sample .runsettings file we need to use during the nightly TFS build run?
Or is there anything else we are missing to run Selenium tests using SpecRun?

Stephen McCafferty

May 20, 2016, 6:46:31 AM
to SpecRun
If the name of your .srprofile file is not TFS.srprofile or default.srprofile, you will need to update your run settings accordingly (the <Profile> element) and enter the path to your .runsettings file in the Run Settings File field of your build definition (under Automated Tests in your screenshot). You should not need to make additional changes to the run settings, but if you are still having issues, can you tell me what your symptoms are?
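For illustration, a minimal .runsettings that points the runner at a non-default profile might look like this (a sketch; "Nightly.srprofile" is a made-up profile name):

```xml
<?xml version="1.0" encoding="utf-8"?>
<RunSettings>
  <SpecRun>
    <!-- Only needed when the profile is not TFS.srprofile or default.srprofile;
         "Nightly.srprofile" is a hypothetical example name -->
    <Profile>Nightly.srprofile</Profile>
  </SpecRun>
</RunSettings>
```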

In case you haven't looked yet, more details on the run settings can be found here and information on building with TFS 2015 can be found here.

Tabor Lechner

Jun 7, 2016, 3:25:57 PM
to SpecRun
Our setup is the same. We were using MSTest, along with a TFS build server, and using a test controller/test agents to distribute tests across 5 test agent servers. We had a .testsettings file that stated which server the test controller was on.

I have changed the unitTestProvider to "SpecRun+MsTest", made a few changes to Default.srprofile (to increase the number of concurrent tests), and can run this locally with no issues. However, when I try to set up a new build definition, nothing runs. I have created a .runsettings file and added the <MSTest> section to point to the .testsettings file. It looks like this:

<?xml version="1.0" encoding="utf-8"?>
<RunSettings>
  <!-- Configurations that affect the Test Framework -->
  <RunConfiguration>
    <MaxCpuCount>1</MaxCpuCount>
    <!-- Path relative to solution directory -->
    <ResultsDirectory>.\TestResults</ResultsDirectory>
    <!-- [x86] | x64  
      - You can also change it from menu Test, Test Settings, Default Processor Architecture -->
    <TargetPlatform>x86</TargetPlatform>
    <!-- Framework35 | [Framework40] | Framework45 -->
    <TargetFrameworkVersion>Framework45</TargetFrameworkVersion>
    <!-- Path to Test Adapters -->
    <TestAdaptersPaths>%SystemDrive%\Temp\foo;%SystemDrive%\Temp\bar</TestAdaptersPaths>
  </RunConfiguration>


  <!-- Adapter Specific sections -->

  <!-- MSTest adapter -->
  <MSTest>
    <SettingsFile>RegressionTest.testsettings</SettingsFile>
    <ForcedLegacyMode>true</ForcedLegacyMode>
  </MSTest>
  
  <!-- Configurations for SpecFlow+ Runner -->
  <SpecRun>
    <Profile>Default.srprofile</Profile>
    <ReportFile>CustomReport.html</ReportFile>
    <GenerateSpecRunTrait>false</GenerateSpecRunTrait>
    <GenerateFeatureTrait>false</GenerateFeatureTrait>
  </SpecRun>

</RunSettings>

I set up the build definition so that the run settings file points to the .runsettings file. However, whenever I start a build on that definition, nothing runs. I can see that the build controller is handling the build and passing it off to the test controller, but the test agent is not running any tests. There has to be something small I'm missing, but I'm not sure what it is. Any help would be greatly appreciated. We are doing a trial of SpecRun right now, and could see ourselves purchasing licenses for it if we can get it to work with our scheduled regression tests via TFS builds.

Andreas Willich

Jun 7, 2016, 4:52:49 PM
to SpecRun
Hi Tabor

Please check that the NuGet packages are completely checked in (with DLLs) or that the NuGet package restore works on your build server.
If the SpecFlow+ Runner test adapter is missing, the agents cannot execute any tests.

Did you also check that you have done everything according to http://www.specflow.org/plus/documentation/SpecFlowPlus-and-Build-Servers/ ?
Especially if you are using the XAML build agents, have a look at the known issues.

If you still have problems, could you please provide more detailed information about your build setup?
  • TFS version
  • build system (XAML or the new one)
  • your Default.srprofile

Best regards
Andreas


Tabs

Jun 7, 2016, 5:44:01 PM
to SpecRun

Hello, thank you for the quick response. I believe the issue might be that the test adapter is missing, as I don't recall setting that up anywhere, which could explain why the agents are not running anything. I also haven't seen a reference to it in any of the documentation. Is that a separate install from SpecFlow+ Runner?

To answer some of the other questions: I have verified that all DLLs exist where they should when the project is built on the build servers. They are also checked into the project. We are using TFS 2015 with the old XAML build definitions.

Here is the Default.srprofile
<?xml version="1.0" encoding="utf-8"?>
<TestProfile>
  <Settings projectName="RegressionTests.IntegrationSuite" projectId="{86b72a39-ef18-46ff-a8e4-2b8c3ed78ac4}" />
  <Execution retryFor="Failing" retryCount="2" stopAfterFailures="0" testThreadCount="5" testSchedulingMode="Sequential" />
  <!-- For collecting by a SpecRun server update and enable the following element. For using the 
      collected statistics, set testSchedulingMode="Adaptive" attribute on the <Execution> element.
    <Server serverUrl="http://specrunserver:6365" publishResults="true" />
  -->
  <TestAssemblyPaths>
    <TestAssemblyPath>RegressionTests.IntegrationSuite.dll</TestAssemblyPath>
  </TestAssemblyPaths>
  <DeploymentTransformation>
    <Steps>
      <!-- sample config transform to change the connection string-->
      <!--<ConfigFileTransformation configFile="App.config">
        <Transformation>
          <![CDATA[<?xml version="1.0" encoding="utf-8"?>
            <configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
              <connectionStrings>
                <add name="MyDatabase" connectionString="Data Source=.;Initial Catalog=MyDatabaseForTesting;Integrated Security=True"
                     xdt:Locator="Match(name)" xdt:Transform="SetAttributes(connectionString)" />
              </connectionStrings>
            </configuration>
          ]]>
        </Transformation>
      </ConfigFileTransformation>-->
    </Steps>
  </DeploymentTransformation>
</TestProfile>



Thanks Again,
Tabor

Vemula Raju

Jun 8, 2016, 1:32:49 AM
to SpecRun
Hi all,

Can I please ask a somewhat out-of-context question? With multiple threads, how do we handle multiple threads talking to the same database?

Can you please elaborate a bit on how the following configuration works:

<connectionStrings>
  <add name="MyDatabase" connectionString="Data Source=DatabaseServerName;Initial Catalog=DatabaseForTesting;Integrated Security=True"
       xdt:Locator="Match(name)" xdt:Transform="SetAttributes(connectionString)" />
</connectionStrings>


Thanks a lot.
Raj

Vemula Raju

Jun 8, 2016, 1:39:41 AM
to SpecRun
To run the tests on the build server, allow interactive services on your build server; that will do the magic. It worked for me.



Andreas Willich

Jun 8, 2016, 3:36:02 AM
to spe...@googlegroups.com
The test adapter is part of the SpecRun.Runner package (https://www.nuget.org/packages/SpecRun.Runner/). You have this installed, because it contains the Default.srprofile.
As you are using SpecRun+MsTest as the unit test provider and you have upgraded to 1.4, please upgrade again to 1.4.1-rc001 (prerelease version). There is a bug in 1.4 when you use SpecRun+MsTest or SpecRun+NUnit.

Did you check the known issues for TFS XAML builds?

Known Issues

  • The build agents cache the test adapters, which means that the last test adapter used is reused for each build. You can thus only use a single SpecFlow+ Runner version with the same build agent, as the cached version is always used. If you want to use a different version of SpecFlow+ Runner for different builds, you need to define separate build agents.
  • Upgrading SpecFlow to a newer version requires a restart to purge the cache. If you do not restart the build agents after upgrading SpecFlow+ Runner, the runner will find no tests to execute.

Andreas Willich

Jun 8, 2016, 3:53:38 AM
to spe...@googlegroups.com
Hi Raj

You use standard XDT in the ConfigFileTransformation. This is applied to the given configFile.
You can also use additional placeholders. See the available ones here: http://www.specflow.org/plus/documentation/SpecFlowPlus-Runner-Profiles/#Placeholders

If you want different deployment steps on your build server and locally, you can use multiple .srprofile files.

About your database:
This is a shared resource for your tests.
With multi-threaded tests, I think you have the following options, with different levels of difficulty:
  • create a separate database for every test thread (easy)
    you can use the ConfigFileTransformation and the placeholders to change the connection string for every thread
  • mock your database so that you no longer depend on it (hard)
  • choose the test data in your tests so that there is no overlap between the tests (hard)
There are probably other solutions as well; these three came to mind at the moment.


Best regards
Andreas

Tabs

Jun 8, 2016, 9:18:36 AM
to SpecRun
I see. Does the test adapter need to be installed on all the test agents running tests? Or does simply being checked into the project and built on the build server take care of that?

I have updated to 1.4.1-rc001 and the issue still persists. I checked those two known issues and made sure everything was restarted, including the build server and all build agents.

I will try setting up a build using the new TFS 2015 build definition and see if I have better luck there if there are no other options. 

Tabs

Jun 8, 2016, 9:22:05 AM
to SpecRun
I turned this on everywhere and no luck. 

Vemula Raju

Jun 8, 2016, 8:02:51 PM
to SpecRun
Thanks Andreas for the reply.

Can you please explain a bit more how the first (easy) option works?
  • create a separate database for every test thread (easy)
    you can use the ConfigFileTransformation and the placeholders to change the connection string for every thread
What do you mean by creating a separate database for every thread, and how does that work? What configuration changes are required if, say, my DB server = My_DB_Server and DB name = My_DB_Name?

Thanks,
Raj

Stephen McCafferty

Jun 9, 2016, 7:42:43 AM
to SpecRun
You can apply transformations to the configuration file used by SpecFlow+ to define different configuration settings, e.g. for different test threads. There are a number of placeholders that can be used when applying transformations to the config file. In the case of multiple threads, the placeholder of most interest to you is {TestThreadId}. This placeholder is replaced by the numeric index of the thread when the transformation is applied to the configuration, so the value will be 0 for your first thread, 1 for the next thread, etc.

You can use this placeholder to apply a transformation to the database connection string that includes the thread's ID (unique identifier) in the name of the database instance you want to connect to. As long as you have multiple database instances with similar names whose only difference is the numeric identifier, each thread can access a separate database instance. For example, say you have 4 threads and 4 database instances:
Instance0
Instance1
Instance2
Instance3

If you transform the database connection string in the configuration file to become "Instance{TestThreadId}", this will automatically transform the configuration file for each thread, and each thread will access the corresponding database instance. This ensures that the threads are not accessing the same database at the same time.
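Putting that together, a transformation step along these lines would give each test thread its own instance. This is only a sketch: the server name, database name prefix and connection string details are placeholders, not taken from this thread.

```xml
<ConfigFileTransformation configFile="App.config">
  <Transformation>
    <![CDATA[<?xml version="1.0" encoding="utf-8"?>
      <configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
        <connectionStrings>
          <!-- {TestThreadId} expands to 0, 1, 2, ... so thread 0 connects to
               Instance0, thread 1 to Instance1, and so on -->
          <add name="MyDatabase"
               connectionString="Data Source=MyDbServer;Initial Catalog=Instance{TestThreadId};Integrated Security=True"
               xdt:Locator="Match(name)" xdt:Transform="SetAttributes(connectionString)" />
        </connectionStrings>
      </configuration>
    ]]>
  </Transformation>
</ConfigFileTransformation>
```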

There are a number of other placeholders available, you can find them at the link Andi posted above.

You might want to take a look at this video, which illustrates how to access a different database instance in different test threads. While the video was made for version 1.2, the concept itself hasn't changed.

Andreas Willich

Jun 9, 2016, 10:13:03 AM
to spe...@googlegroups.com
The test adapter should be restored with the NuGet restore, and the test executor should find and use it.

If you still have problems, would it be possible for you to send me the project, so that I can have a look at it?
Are you trying out the runner in a playground project?

If you do not want to share it public, you could also send it to sup...@specflow.org.

Tabs

Jun 13, 2016, 4:39:28 PM
to SpecRun
Sorry for the late reply, I have been out. Unfortunately, I am trying this out on our main project, so I will be unable to send that. I can try to get a little test project up and running to send over sometime this week. What would be the best way to send it?

Thanks,
Tabor 

Stephen McCafferty

Jun 14, 2016, 10:49:51 AM
to SpecRun
As long as the project is not massive (highly unlikely for a test project), you can send it to sup...@specflow.org, as Andi suggested.

You can obviously also upload the project and send us a download link if you prefer.

Vemula Raju

Jun 19, 2016, 10:00:18 PM
to SpecRun
Thanks Stephen,

My application is deployed on a VM. How would it make a difference if I change the app config file within my SpecFlow/Selenium project, when in reality my application is deployed on a VM with its own config files? How will those config files get changed during the run?

My SpecFlow/Selenium project config is something like this:

  <appSettings>
    <add key="Browser" value="Chrome" />
    <!-- Chrome, IE, Firefox -->
    <!--<add key="Url" value="" />
    <add key="FhirServerUrl" value="" />
    <add key="ClearIndexUrl" value="" />-->
    <add key="Url" value="/" />
    <add key="FhirServerUrl" value="" />
    <add key="ClearIndexUrl" value="" />
    <add key="IdentityServerUrl" value=""/>
    <add key="NotUseSelfHosting" value="true" />
    <add key="ClientId" value=""/>
    <add key="SeleniumDriverPath" value="file://c:\selenium\drivers" />
    <add key="TestDataPath" value="Test.json" />
    <add key="ClientSettingsProvider.ServiceUri" value="" />
  </appSettings>

Stephen McCafferty

Jun 21, 2016, 9:07:06 AM
to SpecRun
If you look at the file default.srprofile, you will notice a section called DeploymentTransformation that includes a child Steps element that is commented out by default.

It looks like this:

  <DeploymentTransformation>
    <Steps>
      <!-- sample config transform to change the connection string-->
      <!--<ConfigFileTransformation configFile="App.config">
        <Transformation>
          <![CDATA[<?xml version="1.0" encoding="utf-8"?>
            <configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
              <connectionStrings>
                <add name="MyDatabase" connectionString="Data Source=.;Initial Catalog=MyDatabaseForTesting;Integrated Security=True"
                     xdt:Locator="Match(name)" xdt:Transform="SetAttributes(connectionString)" />
              </connectionStrings>
            </configuration>
          ]]>
        </Transformation>
      </ConfigFileTransformation>-->
    </Steps>
  </DeploymentTransformation>

As you can see, this transformation is applied to your App.config by default:
<ConfigFileTransformation configFile="App.config">



Within the commented section is a line that contains the database connection string:

<connectionStrings>
  <add name="MyDatabase" connectionString="Data Source=.;Initial Catalog=MyDatabaseForTesting;Integrated Security=True"
       xdt:Locator="Match(name)" xdt:Transform="SetAttributes(connectionString)" />
</connectionStrings>

Now you can use the aforementioned placeholders (like {TestThreadId}) within this transformation. If you set the Initial Catalog to "Database{TestThreadId}", the configuration file will automatically be transformed for each thread, giving you
Database0
Database1
Database2
etc.

This means that each thread will use a different connection string and access a different database. This also means that potential conflicts where one thread interferes with tests in another thread are prevented.

Note that you can also define targets in your profile. So you can define a target for each browser and give it the same name as your key ("Chrome", "IE", "Firefox" etc.). You can then transform your configuration file for each target using the {Target} placeholder, e.g.
 <add key="{Target}" />
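As a rough sketch of that idea, a profile with per-browser targets and a matching transformation of the "Browser" app setting might look like this. The element names and structure below are assumptions based on the placeholder documentation linked earlier, not a confirmed excerpt; check the profile documentation for the exact target syntax.

```xml
<Targets>
  <Target name="Chrome" />
  <Target name="IE" />
  <Target name="Firefox" />
</Targets>
<DeploymentTransformation>
  <Steps>
    <ConfigFileTransformation configFile="App.config">
      <Transformation>
        <![CDATA[<?xml version="1.0" encoding="utf-8"?>
          <configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
            <appSettings>
              <!-- {Target} is replaced with the target's name (Chrome, IE, Firefox),
                   so each target's run picks up the matching browser setting -->
              <add key="Browser" value="{Target}"
                   xdt:Locator="Match(key)" xdt:Transform="SetAttributes(value)" />
            </appSettings>
          </configuration>
        ]]>
      </Transformation>
    </ConfigFileTransformation>
  </Steps>
</DeploymentTransformation>
```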


Vemula Raju

Jun 21, 2016, 8:43:56 PM
to SpecRun
Thanks a lot for detailed explanation.

I understand that this configuration (part of the Selenium project) is duplicated for the number of threads and used for test execution: <ConfigFileTransformation configFile="App.config">. But this config file is different from the application config file deployed on the QA server, which contains the connection string. That is the actual connection string used by the application, not the app config that is part of the Selenium project. The Selenium project's app config only has the URLs where the application is deployed, test data, and browser settings.


Can you please let me know: in order to run tests using 3 threads, do we need to create 3 copies of the database? My database is around 30 GB, so creating copies of it in all environments (QA, UAT) is not possible.


Thanks,
Raj