--- no new SOAP Search API keys anymore
--- the developer's kit has been deleted
From the Google SOAP Search API (Beta) home page,
two items have been deleted from the menu:
You can look at the original version of the download page:
Frequently Asked Questions
6. How do I get a license key?
We are no longer issuing additional license keys for
the SOAP Search API, and encourage you to use the
AJAX Search API instead. For developers who are already
using the SOAP Search API, we've kept the documentation
live on this site.
Answers about the developer's kit can still be found in the
FAQs, but the code is no longer available.
For the purposes of this post I tried bringing up the page
http://www.google.com/search?q=google+ajax&output=xml and it gives a
polite but firm 'not allowed' message. Since it didn't simply ignore
the output=xml parameter, as it does if one requests
http://www.google.com/search?q=google+ajax&output=json, I'm assuming
some people have access to the XML-formatted result set. Can the XML
result set be made public?
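For reference, the two requests compared above differ only in the output parameter. A minimal sketch of how the URLs are built (the endpoint and parameters are Google's; the helper function here is just illustrative):

```python
from urllib.parse import urlencode

BASE = "http://www.google.com/search"

def search_url(query, output=None):
    """Build a plain Google search URL, optionally requesting a
    specific output format (e.g. 'xml' or 'json')."""
    params = {"q": query}
    if output:
        params["output"] = output
    return f"{BASE}?{urlencode(params)}"

# The two requests compared above:
print(search_url("google ajax", output="xml"))   # refused with a 'not allowed' page
print(search_url("google ajax", output="json"))  # the parameter is silently ignored
```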
Slightly off topic - are there any plans to make AdSense respond to
changes in page content as a result of AJAX?
In the AJAX Search API Terms of Use you will see:
1.3 Appropriate Conduct and Prohibited Uses.
... You agree that you will not, and you will not permit your users or
other third parties to: (a) modify or replace the text, images, or
other content of the Google Search Results, including by (i) changing
the order in which the Google Search Results appear, (ii) intermixing
Search Results from sources other than Google, or (iii) intermixing
other content such that it appears to be part of the Google Search
Results ...
This means the concept behind the AJAX Search API is _very_ different
from that of the SOAP Search API.
What if queries coming in over SOAP were limited to the domain for
which the key was issued?
That is, implicitly add 'site:...' to the request.
This would seem to side-step a large proportion of mischief while
allowing Google (and myself) to gather some valuable information as to
the search terms of interest to the visitors of a specific site.
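The restriction proposed above could be sketched server-side as follows; the function name and the stripping of caller-supplied site: operators are my own assumptions, not anything Google has described:

```python
def restrict_to_domain(query: str, key_domain: str) -> str:
    """Sketch of the proposal: any query arriving over the SOAP API is
    implicitly scoped to the domain the key was issued for."""
    # Drop any site: operator the caller supplied so it cannot be used
    # to escape the restriction, then prepend our own.
    terms = [t for t in query.split() if not t.lower().startswith("site:")]
    return f"site:{key_domain} " + " ".join(terms)

print(restrict_to_domain("google ajax", "example.com"))
# site:example.com google ajax
```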
> This would seem to side-step a large proportion of mischief while
> allowing Google (and myself) to gather some valuable information as to
> the search terms of interest to the visitors of a specific site.
I would be rather skeptical about adding further restrictions to the
API; most likely this would solve one problem and introduce five others.
To give you an example: the huge demand for keyword-specific
ranking information has led to heavy use of the SOAP API, because
there is no other legitimate way to get this information. No
surprise, this brought up
demands for more than 1000 queries per day or for
results to mirror the ones from the browser ...
A better solution for this would be to add functionality
to the place where it belongs, the "Google webmaster tools".
This is exactly what I'm using the API key for, and I had plans to
give it away for free on my site, supported by Google AdSense...
Any way to find out where I rank for searches, short of manually
searching?
> Any way to find out where I rank for searches, short of manually
> searching?
As I said, I don't know any other legitimate way yet. A different
but related statistic is available from the Webmaster Tools Query
stats, which you most probably already know.
> The key given to you by Google is not bound by domain but by account
I had forgotten this, thanks. But a key-domain pair restriction is at
least conceivable.
> (From Terms and Conditions):
> The Google SOAP Search API service is made available to you for
> your personal, non-commercial use only (at home or at work).
Fair enough. For me, this is implied by the service being in 'beta'.
> I would be rather skeptical to add further restrictions to the API,
> most likely this will solve one problem and introduce five others.
Perhaps, or perhaps not. At least existing applications would not
necessarily be affected.
> To give you an example: the huge demand for keyword-specific
> ranking information has led to heavy use of the SOAP API, because
> there is no other legitimate way to get this information. No
> surprise, this brought up
> demands for more than 1000 queries per day or for
> results to mirror the ones from the browser ...
This is precisely what I would expect a 'site:' restriction to
mitigate. No data useful for determining page rank would be returned
in the context of a 'site:' filter, so you may even see a decrease in
these demands.
I am fairly certain that a 'site:' limitation represents a
positive middle ground for the future of this service.
Is there a downside that I am missing?