Smoke tests


Tomáš Ehrlich

Jan 24, 2015, 10:22:04 AM
to django...@googlegroups.com
Hello,
For the last few weeks I've been thinking about adding smoke tests to my deployment process. Last week I wrote a simple test runner (https://github.com/djentlemen/django-smoked), but I'm still missing a methodology for *what* to test and *how*. Since "smoke test" has a very broad definition across different types of software (https://en.wikipedia.org/wiki/Smoke_testing_(software)), my idea is: after every deployment, run a small subset of tests with *production* settings and just check that the app was deployed successfully. If not, roll back to the previous version immediately.


A few such tests might be:
 — check responses of a few URL endpoints (like the homepage)
 — check that the database settings are valid (since most tests run against a testing/development database)
 — check cache, email settings, etc. (for the same reasons as above)
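A minimal runner for checks like these could look something like the sketch below. This is framework-agnostic and all names (`run_checks`, `check_homepage`) are illustrative, not part of django-smoked; each check is just a zero-argument callable that raises on failure:

```python
def check_homepage():
    # Stand-in for a real check; in practice this might fetch "/"
    # with production settings and assert a 200 response.
    pass

def run_checks(checks):
    """Run each check, collecting (name, passed, error) tuples."""
    results = []
    for check in checks:
        try:
            check()
            results.append((check.__name__, True, None))
        except Exception as exc:
            results.append((check.__name__, False, str(exc)))
    return results
```

The point of collecting results rather than failing fast is that a deployment script can run every check, log all failures at once, and then decide whether to roll back.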


I wonder: how do you test your apps? Do you use some kind of "smoke tests" like those described above?


Cheers,
   Tom

Cal Leeming

Jan 25, 2015, 8:42:11 AM
to django...@googlegroups.com
Hi Tom,

Personally I'm not convinced by the concept of smoke tests in
production. If you have a proper development workflow and your build
works in dev, then you should be confident that your build will work
in prod. Testing URL endpoints in prod should be part of your devops
testing, and kept completely separate from your application. Testing
database settings, cache, email, etc. should be part of your
bootstrapping, and the application should report such failures via a
reliable tracking system (New Relic, Sentry, etc.). This is
not smoke testing; it's just good practice. Deployment testing is not
smoke testing; it's deployment testing. Testing your
application in production is an anti-pattern, but testing your
deployment in production is a necessity, and the two
should be kept completely separate.

Some people use BDD, which I'm personally not a fan of, and others
use tools such as Selenium and Mocha to ensure pages are working
correctly. If you have set up your application correctly, then you
will be catching these errors as they happen (e.g. with Raven JS).

I don't know why "smoke tests" are suddenly becoming the new buzz phrase....

Anyway, hope this helps a bit

Cal

Derek

Jan 26, 2015, 3:32:38 AM
to django...@googlegroups.com
The same article you refer to says:

"A frequent characteristic of a smoke test is that it runs quickly, often in the order of a few minutes and thus provides much quicker feedback and faster turnaround than the running of full test suites which can take hours or even days."

I would think that if your current unit and functional tests already run completely within the order of minutes, then adding smoke tests as well may be redundant. There are also tools to help speed up those tests, which might be worth investigating before adding another test layer.

Tomáš Ehrlich

Jan 26, 2015, 12:41:01 PM
to django...@googlegroups.com, c...@iops.io
Hello Cal,
thank you for your answer.

I'm deploying my project to AWS, so I can't easily replicate the whole production environment. I deploy to a "release" environment which is about the same as the production one (hosted on AWS, created with the same scripts as production), but there are still some tests I need to run in this environment.


For example: check that email is set up correctly. I need a simple test which sends an email. It doesn't matter whether I catch the possible exception in the test or in Sentry; I just want to know quickly whether it's working.
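Such an email check can be sketched with the sender injected as a parameter, so the same function works both with Django's real `django.core.mail.send_mail` (adapted to its signature) and with a stub when testing the check itself. Everything here, including `check_email` and the recipient address, is illustrative:

```python
def check_email(send, recipient="ops@example.com"):
    """Attempt to send a probe message; return (ok, error).

    `send` stands in for the real mail-sending callable, e.g. a thin
    wrapper around django.core.mail.send_mail in a Django project.
    """
    try:
        send(
            subject="deployment smoke test",
            message="probe sent right after deploy",
            recipient=recipient,
        )
        return True, None
    except Exception as exc:
        return False, str(exc)
```

Returning `(ok, error)` instead of raising lets a deployment script treat the result like any other check and decide on rollback itself.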

Checking a URL endpoint is a different problem. I can't simply check the URL (as I currently do in django-smoked, btw), because my app could be behind a load balancer where endpoints are updated gradually. The app itself has to be able to report that it works before it can be plugged into the load balancer. Right now I'm trying to figure out how to send a few requests to the WSGI app directly.
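Calling the WSGI app in-process is doable with a hand-built environ (per PEP 3333), without going through the load balancer at all. A sketch, using a toy app where a Django project would pass the object returned by `get_wsgi_application()`:

```python
from io import BytesIO, StringIO

def wsgi_get(app, path="/"):
    """Call a WSGI app in-process with a minimal GET environ."""
    environ = {
        "REQUEST_METHOD": "GET",
        "PATH_INFO": path,
        "SERVER_NAME": "localhost",
        "SERVER_PORT": "80",
        "SERVER_PROTOCOL": "HTTP/1.1",
        "wsgi.version": (1, 0),
        "wsgi.url_scheme": "http",
        "wsgi.input": BytesIO(b""),
        "wsgi.errors": StringIO(),
        "wsgi.multithread": False,
        "wsgi.multiprocess": False,
        "wsgi.run_once": True,
    }
    captured = {}
    def start_response(status, headers):
        captured["status"] = status
        captured["headers"] = headers
    body = b"".join(app(environ, start_response))
    return captured["status"], body

# Toy app standing in for the real Django application object.
def toy_app(environ, start_response):
    if environ["PATH_INFO"] == "/":
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [b"ok"]
    start_response("404 Not Found", [("Content-Type", "text/plain")])
    return [b"missing"]
```

In a Django project, `django.test.Client` does essentially this (plus cookies, redirects, etc.), so it may be the simpler route if the test process can load production settings.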

Testing the deployment (the whole load-balancer -> nginx -> docker stack) is a completely different problem, and I don't want to solve it with these tests.

I understand that testing an app in production is an anti-pattern, but after every release I open a browser and try a few links. I would like to do this programmatically, because once I create a bunch of nodes behind a load balancer, I'm screwed with the manual approach...


And yes, it's probably a buzzword :) I ignored it for a while, but when I heard about it from a third independent source, I started digging into it...

Cheers,
   Tom

Cal Leeming wrote:

Tomáš Ehrlich

Jan 26, 2015, 12:45:25 PM
to django...@googlegroups.com
Hi Derek,
The speed of the tests isn't the problem I'm trying to solve. The problem is that the tests run in a different environment (obviously), but I would like to run a subset of my tests in the production environment. This subset should include only "safe" tests, which don't, for example, create objects in the DB.
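One way to select such a "safe" subset is to tag individual tests and build a filtered suite from the tags. A sketch using plain `unittest` (the `@smoke` decorator and `smoke_suite` helper are a hypothetical convention, not an existing API):

```python
import unittest

def smoke(test):
    """Mark a test method as safe to run against production."""
    test.is_smoke = True
    return test

class SiteTests(unittest.TestCase):
    @smoke
    def test_homepage_responds(self):
        self.assertTrue(True)  # stand-in for a real read-only check

    def test_creates_objects(self):
        self.assertTrue(True)  # unsafe in prod: would write to the DB

def smoke_suite(case_class):
    """Build a suite containing only the @smoke-marked tests."""
    names = [
        n for n in unittest.defaultTestLoader.getTestCaseNames(case_class)
        if getattr(getattr(case_class, n), "is_smoke", False)
    ]
    return unittest.TestSuite(case_class(n) for n in names)
```

With Django's test runner one could achieve the same effect via test labels or a dedicated `smoke_tests` module, but the attribute-filtering idea is the same.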

Cheers,
   Tom


On Monday, January 26, 2015 at 9:32:38 UTC+1, Derek wrote:

Cal Leeming

Jan 26, 2015, 1:05:52 PM
to Tomáš Ehrlich, django...@googlegroups.com, Cal Leeming
In-line comments below, but imho I'd recommend re-thinking your
workflow as it doesn't make sense to me.

On Mon, Jan 26, 2015 at 5:41 PM, Tomáš Ehrlich <tomas....@gmail.com> wrote:
> Hello Cal,
> thank you for your answer.
>
> I'm deploying my project to AWS, so I can't easily replicate the whole
> production environment. I deploy to a "release" environment which is about
> the same as the production one (hosted on AWS, created with the same
> scripts as production), but there are still some tests I need to run in
> this environment.

I don't understand what you mean by a "release" environment; this isn't
an obvious naming convention (at least in the context you've
explained).

>
>
> For example: check that email is set up correctly. I need a simple test
> which sends an email. It doesn't matter whether I catch the possible
> exception in the test or in Sentry; I just want to know quickly whether it's working.

This would be part of your bootstrapping and/or deployment testing.
These failures can be captured by using Sentry inside wsgi.py, or by a log
collection/aggregation service (e.g. Logentries) in the event that your
wsgi.py itself becomes unloadable. uWSGI also has some error-handling
hooks for the case where a WSGI entry point fails to load, IIRC.
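Capturing errors "inside wsgi.py" can be as simple as wrapping the application object in a reporting middleware before exporting it. A sketch where `report` stands in for a real client call (e.g. raven's `captureException`); the middleware itself is illustrative:

```python
def error_reporting_middleware(app, report):
    """Wrap a WSGI app so unhandled exceptions are reported
    before being re-raised to the server."""
    def wrapped(environ, start_response):
        try:
            return app(environ, start_response)
        except Exception as exc:
            report(exc)   # e.g. hand off to Sentry here
            raise         # let the server still return a 500
    return wrapped
```

Re-raising after reporting is deliberate: the tracking system gets the traceback, while the WSGI server keeps its normal error handling.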

>
> Checking a URL endpoint is a different problem. I can't simply check the
> URL (as I currently do in django-smoked, btw), because my app could be
> behind a load balancer where endpoints are updated gradually. The app
> itself has to be able to report that it works before it can be plugged
> into the load balancer. Right now I'm trying to figure out how to send a
> few requests to the WSGI app directly.
>
> Testing the deployment (the whole load-balancer -> nginx -> docker stack)
> is a completely different problem, and I don't want to solve it with
> these tests.
>
> I understand that testing an app in production is an anti-pattern, but
> after every release I open a browser and try a few links. I would like to
> do this programmatically, because once I create a bunch of nodes behind a
> load balancer, I'm screwed with the manual approach...

For this you would use something like Selenium or Mocha, but again, if
you're releasing into prod then these tests would already have been
run as part of your CI workflow and would be redundant.