Running the CloudFormation demo


Alastair James

May 12, 2013, 5:29:59 AM
to druid-de...@googlegroups.com
Hi there, I am trying to get the wikipedia edit demo to run.


(Note that the instructions only work if US1 is your default region; the script does not run if, say, EU is selected. You may want to point that out.)

It installs and I can get to the demoServlet page, but then what do I do? I have tried performing some queries, but I keep getting no results ([] over the wire). I'm not sure if I am entering the wrong data or the data has not loaded.

So questions:

1) What would an example valid query look like? So I can validate data is loaded.
2) How can I tell if data has loaded? Is there an admin panel? I tried SSHing into the master node, but have no idea what I am looking for.

Cheers

Alastair

Eric Tschetter

May 13, 2013, 11:49:27 AM
to druid-de...@googlegroups.com
Alastair,

Allow me to apologize; it looks like that was released a little prematurely.  I fired it up and also cannot get a query to return a value from the UI.  I've updated the wiki page to indicate that the setup currently does not work, and we will get to work on fixing it up.

In the meantime, you can check out the realtime examples to get a feel for what it is like interacting with the system (the example runs by curl'ing a query object up to the server).  
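For readers landing here later, a minimal sketch of what that curl interaction looks like. The endpoint, port, and dataSource name are assumptions taken from the examples later in this thread and the demo's log output; adjust them for your setup:

```shell
# Write a minimal timeBoundary query body (dataSource name taken from the
# wikipedia demo's segment logs; substitute your own).
cat > time_boundary_query.body <<'EOF'
{
  "queryType"  : "timeBoundary",
  "dataSource" : "wikipedia_editstream"
}
EOF

# POST it to a running Druid node (endpoint as in the curl example further
# down this thread):
# curl -X POST "http://localhost:8080/druid/v2/?pretty" \
#      -H "Content-Type: application/json" -d @time_boundary_query.body

# Sanity-check that the body was written before sending it:
grep -q '"queryType"' time_boundary_query.body && echo "query body written"
```

A timeBoundary query is a good first smoke test because it returns the min/max timestamps of loaded data, so an empty result means no segments are being served.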


You can also check out https://github.com/housejester/druid-test-harness for some scripts that simplify firing up all of the various processes and pushing some dummy data in.  The test-harness was built up around an older version of the code, so I'm not sure if it will work out of the box with the current version, but it's a starting point.

--Eric




--
You received this message because you are subscribed to the Google Groups "Druid Development" group.
To unsubscribe from this group and stop receiving emails from it, send an email to druid-developm...@googlegroups.com.
To post to this group, send email to druid-de...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msg/druid-development/-/s0r6-iEujV4J.
For more options, visit https://groups.google.com/groups/opt_out.

Harpreet Singh

May 13, 2013, 2:54:19 PM
to druid-de...@googlegroups.com
Hello Eric & Alastair,

My apologies. The access key for fetching data from S3 was changed, but the new key was not refreshed in runtime.properties.

Also, regarding Alastair's other question: the S3 bucket is configured for US Region 1, so the template only supports the region in which the S3 bucket is configured.

I will fix the above and report back.

Regards

Harpreet

Steven Harris

May 13, 2013, 3:04:10 PM
to druid-de...@googlegroups.com
Are there some tests we can add to protect against this moving forward? 

Cheers,
Steve

Alastair James

May 13, 2013, 5:56:42 PM
to druid-de...@googlegroups.com
Hi there.

Thanks for the replies. I think we may wait until things are a bit more documented and well defined before reassessing the platform. I don't mean to criticise (it looks like a great product), but the documentation is pretty lacking. I can't really get an idea of how things work or how to manage a cluster.

The link you supplied (https://github.com/metamx/druid/wiki/Realtime-Examples), for example: all the links on it seem to be 404ing.

As a note on the CloudFormation script: if you want to make it multi-region compatible, you will also need to change the IP for the EC2 node metadata service, as, annoyingly, that changes per region.

Cheers

Al





--
Dr Alastair James
CTO Ometria.com
Skype: al.james

Eric Tschetter

May 13, 2013, 6:18:21 PM
to druid-de...@googlegroups.com
> Thanks for the replies. I think we may wait until things are a bit more documented and well defined before reassessing the platform. I don't mean to criticise (it looks like a great product), but the documentation is pretty lacking. I can't really get an idea of how things work or how to manage a cluster.

Completely understand, and thanks for the feedback.  We are aware that the documentation is lacking and are working on improving it and the initial out-of-the-box experience.  These goals compete with feature development, though, so try coming back in another 2 or 3 months and hopefully we will have a tighter ship.
 

> The link you supplied (https://github.com/metamx/druid/wiki/Realtime-Examples), for example: all the links on it seem to be 404ing.

Thanks for letting me know.  We've been updating things in this area a lot and some of the links went stale.  I've fixed them.

 
> As a note on the CloudFormation script: if you want to make it multi-region compatible, you will also need to change the IP for the EC2 node metadata service, as, annoyingly, that changes per region.

Thanks for the note, will keep that in mind as we fix it.

--Eric

Eric Tschetter

May 13, 2013, 7:04:13 PM
to druid-de...@googlegroups.com
Harpreet,

Is the CloudFormation setup configured to pull down private data so that it requires meaningful credentials to get at the data?

When I fired up my cluster, it was actually downloading data on the compute nodes, but the UI wasn't actually firing off queries from what I could tell.

--Eric 



Harpreet Singh

May 14, 2013, 1:15:28 AM
to druid-de...@googlegroups.com
Hello Eric,

The CloudFormation AMI uses runtime.properties for the S3 access keys. Although the data is preloaded into MySQL, validation inside the Druid API fails if the S3 credentials are incorrect. The UI loads fine, but when the fetch request is sent it fails because the S3 credentials are not valid. I looked into the logs, and we encountered a similar issue the last time the access key was changed (since the AMI keeps the same set of properties after creation).

The way I fixed it last time was to recreate the AMI and update the template, but this will break again the next time the access keys are changed. Instead, I could create a runtime configuration object and pull the properties from a REST API at a central location on server startup.
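A sketch of that boot-time overlay idea. The file paths and the property key below are hypothetical, and in production the source would be a curl to the central config endpoint rather than a local file:

```shell
# Overlay centrally managed properties onto the AMI's baked-in
# runtime.properties at startup, so rotating the S3 keys doesn't
# require rebuilding the AMI.
fetch_runtime_properties() {
  local src="$1" dest="$2"
  # In production, replace the copy with something like:
  #   curl -sf "$CONFIG_URL/runtime.properties" -o "$dest.new"
  cp "$src" "$dest.new" || return 1
  mv "$dest.new" "$dest"   # atomic swap so a half-written file is never read
}

# Demo with local files standing in for the central endpoint
# (property key is a made-up placeholder):
echo "com.example.s3.accessKey=NEWKEY" > /tmp/central.properties
echo "com.example.s3.accessKey=OLDKEY" > /tmp/runtime.properties
fetch_runtime_properties /tmp/central.properties /tmp/runtime.properties
grep accessKey /tmp/runtime.properties   # now shows the NEWKEY line
```

The write-to-temp-then-rename step matters here: a node that restarts mid-fetch reads either the old complete file or the new complete file, never a truncated one.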

Regards

Harpreet

Alastair James

May 14, 2013, 3:44:19 AM
to druid-development
Thanks everyone. I look forward to looking again soon.

As a note on the S3 access-keys issue: might I suggest setting up a separate IAM identity that has access only to the bucket and prefix (directory) for the required files? That IAM user's keys can then be different from your main ones and will never change unless you change them manually.
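A sketch of such an IAM policy. The bucket and prefix names are taken from the segment logs earlier in this thread; treat the exact statements as a starting point, not a reviewed policy:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::druidpublicdata/wikipedia-editstream/*"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::druidpublicdata",
      "Condition": {"StringLike": {"s3:prefix": ["wikipedia-editstream/*"]}}
    }
  ]
}
```

Note that `GetObject` applies to object ARNs (bucket/prefix/*) while `ListBucket` applies to the bucket ARN itself, which is why the prefix restriction on listing goes in a `Condition`.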

Al



Eric Tschetter

May 14, 2013, 10:11:04 AM
to druid-de...@googlegroups.com
Hrm, that data is intended to just be public.  I added a bucket policy now to make it so that anybody should be able to read the objects.  

Harpreet, can you verify that the segment load issue is fixed?

--Eric

Harpreet Singh

May 22, 2013, 9:05:16 AM
to druid-de...@googlegroups.com
Hello Eric,

I ran the test today and created a stack from the template. I don't see the data coming in. We have old access-key and secret-key properties in runtime.properties. I tried the same credentials in 3Hub and that too could not connect.

Below are sections of the master & compute logs (it is still breaking on S3 access):

Master

2013-05-22 12:41:41,515 INFO [Master-ZKYP--0] com.metamx.druid.master.LoadQueuePeon - Server[/druid/loadQueuePath/ec2-184-73-68-28.compute-1.amazonaws.com]'s currently loading entry[wikipedia_editstream_2011-10-04T00:00:00.000Z_2011-10-05T00:00:00.000Z_1970-01-01T00:00:00.001Z] appeared.
2013-05-22 12:41:41,516 INFO [Master-Exec--0] com.metamx.druid.master.LoadQueuePeon - Asking server peon[/druid/loadQueuePath/ec2-184-73-68-28.compute-1.amazonaws.com] to load segment[DataSegment{size=18072482, shardSpec=NoneShardSpec, metrics=[edits, delta], dimensions=[anonymous, unpatrolled, geo, page, newPage, language, user, namespace], version='1970-01-01T00:00:00.001Z', loadSpec={type=s3_zip, bucket=druidpublicdata, key=wikipedia-editstream/segments/2011-10-01T00:00:00.000Z_2011-10-02T00:00:00.000Z/1970-01-01T00:00:00.001Z/0/index.zip}, interval=2011-10-01T00:00:00.000Z/2011-10-02T00:00:00.000Z, dataSource='wikipedia_editstream', binaryVersion='null'}]
2013-05-22 12:41:41,516 INFO [Master-Exec--0] com.metamx.druid.master.LoadQueuePeon - Server[/druid/loadQueuePath/ec2-184-73-68-28.compute-1.amazonaws.com] skipping doNext() because something is currently loading[SegmentChangeRequestLoad{segment=DataSegment{size=18858973, shardSpec=NoneShardSpec, metrics=[edits, delta], dimensions=[anonymous, unpatrolled, geo, page, newPage, language, user, namespace], version='1970-01-01T00:00:00.001Z', loadSpec={type=s3_zip, bucket=druidpublicdata, key=wikipedia-editstream/segments/2011-10-04T00:00:00.000Z_2011-10-05T00:00:00.000Z/1970-01-01T00:00:00.001Z/0/index.zip}, interval=2011-10-04T00:00:00.000Z/2011-10-05T00:00:00.000Z, dataSource='wikipedia_editstream', binaryVersion='null'}}].
2013-05-22 12:41:41,516 INFO [Master-Exec--0] com.metamx.druid.master.LoadQueuePeon - Asking server peon[/druid/loadQueuePath/ec2-184-72-179-98.compute-1.amazonaws.com] to load segment[DataSegment{size=18072482, shardSpec=NoneShardSpec, metrics=[edits, delta], dimensions=[anonymous, unpatrolled, geo, page, newPage, language, user, namespace], version='1970-01-01T00:00:00.001Z', loadSpec={type=s3_zip, bucket=druidpublicdata, key=wikipedia-editstream/segments/2011-10-01T00:00:00.000Z_2011-10-02T00:00:00.000Z/1970-01-01T00:00:00.001Z/0/index.zip}, interval=2011-10-01T00:00:00.000Z/2011-10-02T00:00:00.000Z, dataSource='wikipedia_editstream', binaryVersion='null'}]
2013-05-22 12:41:41,516 INFO [Master-Exec--0] com.metamx.druid.master.LoadQueuePeon - Server[/druid/loadQueuePath/ec2-184-72-179-98.compute-1.amazonaws.com] skipping doNext() because something is currently loading[SegmentChangeRequestLoad{segment=DataSegment{size=16524989, shardSpec=NoneShardSpec, metrics=[edits, delta], dimensions=[anonymous, unpatrolled, geo, page, newPage, language, user, namespace], version='1970-01-01T00:00:00.001Z', loadSpec={type=s3_zip, bucket=druidpublicdata, key=wikipedia-editstream/segments/2011-10-05T00:00:00.000Z_2011-10-06T00:00:00.000Z/1970-01-01T00:00:00.001Z/0/index.zip}, interval=2011-10-05T00:00:00.000Z/2011-10-06T00:00:00.000Z, dataSource='wikipedia_editstream', binaryVersion='null'}}].
2013-05-22 12:41:41,517 INFO [Master-Exec--0] com.metamx.druid.master.DruidMasterBalancer - No segments found.  Cannot balance.
2013-05-22 12:41:41,517 INFO [Master-Exec--0] com.metamx.druid.master.DruidMasterLogger - [_default_tier] : Assigned 10 segments among 3 servers
2013-05-22 12:41:41,517 ERROR [Master-Exec--0] com.metamx.druid.master.DruidMaster - Caught exception, ignoring so that schedule keeps going.: {class=com.metamx.druid.master.DruidMaster, exceptionType=class java.lang.NullPointerException, exceptionMessage=null}
java.lang.NullPointerException
at com.google.common.base.Preconditions.checkNotNull(Preconditions.java:187)
at com.google.common.collect.Maps$TransformedEntriesMap.<init>(Maps.java:1165)
at com.google.common.collect.Maps.transformEntries(Maps.java:1064)
at com.metamx.druid.master.DruidMasterLogger.run(DruidMasterLogger.java:97)
at com.metamx.druid.master.DruidMaster$MasterRunnable.run(DruidMaster.java:617)
at com.metamx.druid.master.DruidMaster$3.call(DruidMaster.java:459)
at com.metamx.druid.master.DruidMaster$3.call(DruidMaster.java:452)
at com.metamx.common.concurrent.ScheduledExecutors$2.run(ScheduledExecutors.java:99)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
at java.lang.Thread.run(Thread.java:722)

Compute

2013-05-22 12:58:42,031 ERROR [PhoneBook--0] com.metamx.druid.coordination.ZkCoordinator - Failed to load segment[DataSegment{size=18072482, shardSpec=NoneShardSpec, metrics=[edits, delta], dimensions=[anonymous, unpatrolled, geo, page, newPage, language, user, namespace], version='1970-01-01T00:00:00.001Z', loadSpec={type=s3_zip, bucket=druidpublicdata, key=wikipedia-editstream/segments/2011-10-01T00:00:00.000Z_2011-10-02T00:00:00.000Z/1970-01-01T00:00:00.001Z/0/index.zip}, interval=2011-10-01T00:00:00.000Z/2011-10-02T00:00:00.000Z, dataSource='wikipedia_editstream', binaryVersion='null'}]
com.metamx.druid.loading.SegmentLoadingException: S3 fail!  Key[s3://druidpublicdata/wikipedia-editstream/segments/2011-10-01T00:00:00.000Z_2011-10-02T00:00:00.000Z/1970-01-01T00:00:00.001Z/0/index.zip]
at com.metamx.druid.loading.S3DataSegmentPuller.isObjectInBucket(S3DataSegmentPuller.java:133)
at com.metamx.druid.loading.S3DataSegmentPuller.getSegmentFiles(S3DataSegmentPuller.java:71)
at com.metamx.druid.loading.SingleSegmentLoader.getSegmentFiles(SingleSegmentLoader.java:123)
at com.metamx.druid.loading.SingleSegmentLoader.getSegment(SingleSegmentLoader.java:61)
at com.metamx.druid.loading.DelegatingSegmentLoader.getSegment(DelegatingSegmentLoader.java:49)
at com.metamx.druid.coordination.ServerManager.loadSegment(ServerManager.java:111)
at com.metamx.druid.coordination.ZkCoordinator.addSegment(ZkCoordinator.java:240)
at com.metamx.druid.coordination.SegmentChangeRequestLoad.go(SegmentChangeRequestLoad.java:47)
at com.metamx.druid.coordination.ZkCoordinator$1.newEntry(ZkCoordinator.java:144)
at com.metamx.druid.coordination.ZkCoordinator$1.newEntry(ZkCoordinator.java:131)
at com.metamx.druid.client.ZKPhoneBook$InternalPhoneBook$UpdatingRunnable.run(ZKPhoneBook.java:388)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
at java.lang.Thread.run(Thread.java:722)
Caused by: org.jets3t.service.ServiceException: Request Error. HEAD '/wikipedia-editstream%2Fsegments%2F2011-10-01T00%3A00%3A00.000Z_2011-10-02T00%3A00%3A00.000Z%2F1970-01-01T00%3A00%3A00.001Z%2F0%2Findex.zip' on Host 'druidpublicdata.s3.amazonaws.com' @ 'Wed, 22 May 2013 12:58:42 GMT' -- ResponseCode: 403, ResponseStatus: Forbidden, RequestId: 16E1C068FE33018E, HostId: 75JkllcVrLaBwAa28SlRnV0UJjNfjEttT1vzKHqmM8wVqYLQeXsSw0QXdC3BbiYS
at org.jets3t.service.impl.rest.httpclient.RestStorageService.performRequest(RestStorageService.java:527)
at org.jets3t.service.impl.rest.httpclient.RestStorageService.performRestHead(RestStorageService.java:874)
at org.jets3t.service.impl.rest.httpclient.RestStorageService.getObjectImpl(RestStorageService.java:1950)
at org.jets3t.service.impl.rest.httpclient.RestStorageService.getObjectDetailsImpl(RestStorageService.java:1877)
at org.jets3t.service.StorageService.getObjectDetails(StorageService.java:1095)
at org.jets3t.service.StorageService.getObjectDetails(StorageService.java:550)
at org.jets3t.service.StorageService.isObjectInBucket(StorageService.java:487)
at com.metamx.druid.loading.S3DataSegmentPuller.isObjectInBucket(S3DataSegmentPuller.java:130)
... 13 more
Caused by: org.jets3t.service.impl.rest.HttpException
at org.jets3t.service.impl.rest.httpclient.RestStorageService.performRequest(RestStorageService.java:525)
... 20 more
2013-05-22 12:58:42,035 INFO [PhoneBook--0] com.metamx.druid.coordination.ZkCoordinator - Completed processing for 

Not sure, but it seems we need to change the runtime properties for the access key and secret key.

Regards

Harpreet 

Fangjin Yang

May 22, 2013, 11:58:07 AM
to druid-de...@googlegroups.com
Hi Harpreet,

What version of Druid are you running? We have not changed any configuration around validation. What most likely needs to happen is that the data that the compute node needs to pull down should be in a publicly readable S3 bucket. Are you sure you have the correct permissions for your bucket?

Thanks,
FJ


Harpreet Singh

May 23, 2013, 3:01:37 PM
to druid-de...@googlegroups.com
Hello FJ,

I am using the 0.3.27 release. The Druid runtime properties contain the old access key and secret key. Since Eric has now made the bucket public, should those properties still exist, or do I need to make them blank?

I'm not sure whether validation of the access key and secret key is done for public S3 buckets.

Regards

Harpreet

Fangjin Yang

May 23, 2013, 4:02:33 PM
to druid-de...@googlegroups.com
Hi Harpreet, I'm not entirely aware of the changes Eric made. If the bucket is public you should be able to access it without needing your previous authentication credentials. Can you point me to the bucket? I can try and take a look. 


Alastair James

May 23, 2013, 4:22:46 PM
to druid-development
Hi there. I am still subscribed to this thread.

I can confirm that if you supply credentials they need to be valid, even if the data is public. So you should either use valid credentials or no credentials.
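In runtime.properties terms, that means either of the following. The key names below are my recollection of the 0.3.x-era example configs and the placeholder values are hypothetical, so verify them against your Druid version:

```properties
# Either supply credentials that are currently valid...
com.metamx.aws.accessKey=<your valid access key>
com.metamx.aws.secretKey=<your valid secret key>

# ...or, for a public bucket, supply none at all (leave both unset).
# A stale key pair fails with 403 Forbidden even though the data is public,
# which matches the jets3t stack trace earlier in this thread.
```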

Al




Harpreet Singh

May 27, 2013, 2:43:12 PM
to druid-de...@googlegroups.com
Hello Alastair,

Thanks for your comments. I verified by manually changing the settings (SSHing into the compute & master machines), and it seems to have fixed the issue on restart.

Once I have access to the environment, I will fix the issue in AWS CloudFormation by pushing a new AMI and updating the template file.

I will update the group once I have access from the team and the fix is done.

Thanks for the patience and suggestions.

Regards

Harpreet


Alastair James

May 28, 2013, 4:28:17 AM
to druid-development
That's great. If you could let me know when you have it up and running, I'd appreciate it.

Also, any chance of adding some example queries to the readme page, so we know what we need to do to get a response?

Al




Fangjin Yang

May 28, 2013, 1:06:07 PM
to druid-de...@googlegroups.com
Hi Alastair, if you are interested, I've written up a short tutorial on ingesting and querying Druid via our real-time node.


It may be something to get started with.

Thanks!
FJ


Alastair James

May 28, 2013, 3:16:29 PM
to druid-development
That's great, thanks.

I have some time planned in a week or two to spend analysing a new analytics platform, so I will try this then.

Many thanks

Al 




we...@dancingtrout.net

May 29, 2013, 12:27:16 AM
to druid-de...@googlegroups.com
Followed your instructions, but didn't get very far:
2013-05-29 04:21:31,476 WARN [96853459@qtp-593447551-8] com.metamx.druid.http.QueryServlet - Exception occurred on request []
com.fasterxml.jackson.databind.JsonMappingException: No content to map due to end-of-input
 at [Source: [B@7fad5275; line: 1, column: 1]
    at com.fasterxml.jackson.databind.JsonMappingException.from(JsonMappingException.java:164)
    at com.fasterxml.jackson.databind.ObjectMapper._initForReading(ObjectMapper.java:2836)
    at com.fasterxml.jackson.databind.ObjectMapper._readMapAndClose(ObjectMapper.java:2778)
    at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:2012)
    at com.metamx.druid.http.QueryServlet.doPost(QueryServlet.java:92)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
    at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
    at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:401)
    at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
    at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
    at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
    at org.mortbay.jetty.Server.handle(Server.java:326)
    at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
    at org.mortbay.jetty.HttpConnection$RequestHandler.content(HttpConnection.java:945)
    at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:756)
    at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:218)
    at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
    at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
    at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
Contents of file:
{
  "queryType"  : "timeBoundary",
  "dataSource" : "twitterstream"
}

About 50K records had been ingested, and persisted to disk.

//w

Fangjin Yang

May 29, 2013, 12:41:14 AM
to druid-de...@googlegroups.com
Hi,

It looks like an empty query was sent.

Does ./run_example_client.sh generate any results?

I just tried it on my local machine:

ip-192-168-1-107:druid-services-0.4.6-SNAPSHOT fangjin$ curl -X POST "http://localhost:8080/druid/v2/?pretty" -H "Content-type: application/json"  -d @time_boundary_query.body
[ {
  "timestamp" : "2013-05-29T04:38:00.000Z",
  "result" : {
    "minTime" : "2013-05-29T04:38:00.000Z",
    "maxTime" : "2013-05-29T04:40:00.000Z"
  }
} ]ip-192-168-1-107:druid-services-0.4.6-SNAPSHOT fangjin$


Alastair James

May 29, 2013, 3:49:54 AM
to druid-development
Note there appears to be a problem with the final query JSON blob.

The filter is nested as an object inside another object, which is not valid JSON. It should be:


{
    "queryType": "groupBy",
    "dataSource": "twitterstream",
    "granularity": "all",
    "dimensions": ["htags"],
    "limitSpec": {"type":"default", "columns":[{"dimension": "tweets", "direction":"DESCENDING"}], "limit":5},
    "aggregations":[
      { "type": "longSum", "fieldName": "tweets", "name": "tweets"}
    ],
    "filter": {"type": "selector", "dimension": "lang", "value": "en" },
    "intervals":["2012-10-01T00:00/2020-01-01T00"]
}




Fangjin Yang

May 29, 2013, 11:42:18 AM
to druid-de...@googlegroups.com
Oops, that was a type on my part! Thanks!


Fangjin Yang

May 29, 2013, 11:42:26 AM
to druid-de...@googlegroups.com
Typo :P