Using a Bitly URL to authenticate a user account


Jarrett Prosser

Dec 22, 2016, 7:25:47 AM
to Bitly API
Hey peeps,

We're developing an application that requires users to authenticate their email address. We've implemented a method that generates a unique URL and emails it to the user. When the user follows the link, the HTTP request is registered with our server and we authenticate their address.
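
For context, the flow looks roughly like this (a simplified Python sketch; the names, secret handling, and domain are illustrative, not our actual code):

import hashlib
import hmac
import secrets

SECRET_KEY = b"change-me"  # illustrative; the real key lives in server config

def make_verification_url(email: str) -> str:
    # Generate a random nonce and sign it together with the address so the
    # link can't be forged; the server stores the nonce to make it one-time.
    nonce = secrets.token_urlsafe(16)
    sig = hmac.new(SECRET_KEY, f"{email}:{nonce}".encode(), hashlib.sha256).hexdigest()
    return f"https://example.com/verify?nonce={nonce}&sig={sig}"

Since the link authenticates on the first GET it receives, any crawler fetch trips it, which is the behaviour we're seeing with the shortened links.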

We've been trying to use Bitly to shorten these links, but we're finding that the Bitly API makes an HTTP request to the target URL when it generates the shortened link.

Is this part of the normal functionality? Is there some way we can disable this?

Thanks!

Peter Herndon

Dec 22, 2016, 9:29:49 AM
to Bitly API
Hi Jarrett,

Depending on how you go about shortening a URL, Bitly will crawl your URL to find various data about the page to display in our web UI. You can disallow our crawler by using an industry-standard robots.txt file (http://www.robotstxt.org) at the root of your site. Our crawler identifies as "bitlybot", so if you wanted to block our crawler specifically you could set your robots.txt file to have the following:

User-Agent: bitlybot
Disallow: /

If you wanted to block all crawlers that respect robots.txt, you could instead have the following:

User-Agent: *
Disallow: /

Our crawler caches a site's robots.txt file for 24 hours, so it may take a day or so for our crawler to pick up your new or edited robots.txt and take it into account.

That said, there are crawlers out there that do NOT respect robots.txt, and some people do crawl Bitlinks, which are not hard to discover by random walk from a known starting point. A better strategy is to architect your web application so that HTTP GETs don't trigger one-time effects; ideally, you'd use the POST method for those kinds of tasks. But that's more ideal than real. Start with robots.txt and see if that takes care of the issue. I suspect you'll find that various crawlers will hit those URLs at some point anyway, though likely not before they're triggered by the intended user.
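
As an illustration of that pattern, here's a minimal sketch in Python/Flask (the route, handler names, and token lookup are placeholders, not a prescription): the GET only renders a confirmation form, and the account is marked verified only on the POST that the form submits.

from flask import Flask, request
from markupsafe import escape

app = Flask(__name__)

def mark_email_verified(token: str) -> None:
    # Placeholder: look up the token and flag the matching account as verified.
    pass

@app.route("/verify", methods=["GET"])
def show_confirmation():
    # Crawlers that follow the link land here; nothing changes server-side.
    token = escape(request.args.get("token", ""))
    return (
        '<form method="POST" action="/verify">'
        f'<input type="hidden" name="token" value="{token}">'
        '<button type="submit">Confirm my email address</button>'
        '</form>'
    )

@app.route("/verify", methods=["POST"])
def complete_verification():
    # Only a deliberate form submission reaches this handler, so the
    # one-time effect is safe from crawlers issuing GETs.
    mark_email_verified(request.form.get("token", ""))
    return "Thanks, your address is confirmed."

Well-behaved crawlers generally don't submit forms, so even ones that ignore robots.txt won't trigger the verification.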

Regards,

---Peter

Peter Herndon
Sr. Application Engineer
@Bitly

Jarrett Prosser

Jan 3, 2017, 12:02:29 PM
to Bitly API
Hi Peter, 

Great, thanks for the clear explanation. I'll give this a go and let you know how it turns out.

Jarrett