help on installing Zipkin


Fei Fan

Apr 22, 2013, 11:34:06 PM
to zipkin-user
I followed the instructions at http://twitter.github.io/zipkin/install.html
and got stuck at the Zipkin servers section. I'm able to execute the
bin/sbt update package-dist command, but after it finishes, no "dist"
directory is created, hence there's no zipkin*.zip for me to copy.

At first, the build failed due to failing tests. After I installed
redis, git, etc., the build finally succeeded, but I still don't see the
dist dir being generated. Some exceptions occurred during the build. HELP!
......
Everything above this point looks fine.
[info] + ValidLatestValueFilter should
[info] + fail if latest value is negative
[info] + fail if latest value is zero
[info] + pass if latest value is positive
[info] + SufficientDataFilter should
[info] + pass if has enough data
[info] + OutlierFilter should
[info] + pass if enough outliers have been encountered
[info] + CooldownFilter should
[info] + only pass if enough time has passed since last change
[info] + QueryExtractor should
[info] + require
[info] + serviceName
[info] + parse params
[info] + have defaults for
[info] + endDateTime
[info] + limit
[info] + parse spanName special cases
[info] + all
[info] +
[info] + valid
[info] + parse annotations
[info] + parse key value annotations
[info] + JsonAdapter should
[info] + convert binary annotations
[info] + bool
[info] + short
[info] + int
[info] + long
[info] + double
[info] + string
[info] + Sample should
[info] + keep 10% of traces
[info] + drop all traces
[info] + keep all traces
[info] Building zipkin-scrooge-c7b50693.zip from 43 files.
[info] + Jerkson should
[info] + serialize
[info] + span with no annotations
[info] Generating API documentation for main sources...
[info] Passed: : Total 56, Failed 0, Errors 0, Passed 56, Skipped 0
[info] Writing build properties to: /home/tianjiu.ff/zipkin/zipkin-collector-service/target/resource_managed/main/com/twitter/zipkin/build.properties
model contains 6 documentable templates
[info] API documentation generation successful.
[info] + web builders should
[info] o compile
[info] o web-dev4816035990703726299.scala
[info] o web-zk1023975672757778279.scala
[info] + SnappyCodec should
[info] + compress and decompress
2013-4-23 11:31:33 com.twitter.cassie.ClusterRemapper com$twitter$cassie$ClusterRemapper$$fetchHosts
INFO: Mapping cluster...
2013-4-23 11:31:33 com.twitter.cassie.ClusterRemapper com$twitter$cassie$ClusterRemapper$$fetchHosts
INFO: Mapping cluster...
2013-4-23 11:31:34 com.twitter.cassie.ClusterRemapper$$anonfun$1$$anonfun$apply$mcV$sp$2 apply
SEVERE: error mapping ring
com.twitter.finagle.ServiceTimeoutException: exceeded 1.seconds to cassie while creating a service/connection or reserving a service/connection from the service/connection pool
    at com.twitter.finagle.builder.ClientBuilder.connectTimeoutFactory(ClientBuilder.scala:914)
    at com.twitter.finagle.builder.ClientBuilder.com$twitter$finagle$builder$ClientBuilder$$rawInternalBuildFactory(ClientBuilder.scala:788)
    at com.twitter.finagle.builder.ClientBuilder$$anon$4$$anon$5.<init>(ClientBuilder.scala:807)
    at com.twitter.finagle.builder.ClientBuilder$$anon$4.make(ClientBuilder.scala:806)
    at com.twitter.finagle.util.Managed$$anon$3$$anon$4.<init>(Disposable.scala:48)
    at com.twitter.finagle.util.Managed$$anon$3.make(Disposable.scala:47)
    at com.twitter.finagle.util.Managed$$anon$3$$anon$4.liftedTree1$1(Disposable.scala:51)
    at com.twitter.finagle.util.Managed$$anon$3$$anon$4.<init>(Disposable.scala:50)
    at com.twitter.finagle.util.Managed$$anon$3.make(Disposable.scala:47)
    at com.twitter.finagle.util.Managed$$anon$3$$anon$4.liftedTree1$1(Disposable.scala:51)
    at com.twitter.finagle.util.Managed$$anon$3$$anon$4.<init>(Disposable.scala:50)
    at com.twitter.finagle.util.Managed$$anon$3.make(Disposable.scala:47)
    at com.twitter.finagle.util.Managed$$anon$3$$anon$4.<init>(Disposable.scala:48)
    at com.twitter.finagle.util.Managed$$anon$3.make(Disposable.scala:47)
    at com.twitter.finagle.builder.ClientBuilder$$anon$3.<init>(ClientBuilder.scala:875)
    at com.twitter.finagle.builder.ClientBuilder.build(ClientBuilder.scala:874)
    at com.twitter.cassie.connection.ClusterClientProvider.<init>(ClusterClientProvider.scala:101)
    at com.twitter.cassie.ClusterRemapper.com$twitter$cassie$ClusterRemapper$$fetchHosts(ClusterRemapper.scala:92)
    at com.twitter.cassie.ClusterRemapper$$anonfun$1.apply$mcV$sp(ClusterRemapper.scala:61)
    at com.twitter.util.JavaTimer$$anon$1.run(Timer.scala:155)
    at java.util.TimerThread.mainLoop(Timer.java:534)
    at java.util.TimerThread.run(Timer.java:484)
2013-4-23 11:31:34 com.twitter.cassie.ClusterRemapper com$twitter$cassie$ClusterRemapper$$fetchHosts
INFO: Mapping cluster...
2013-4-23 11:31:34 com.twitter.cassie.ClusterRemapper com$twitter$cassie$ClusterRemapper$$fetchHosts
INFO: Mapping cluster...
2013-4-23 11:31:34 com.twitter.cassie.ClusterRemapper com$twitter$cassie$ClusterRemapper$$fetchHosts
INFO: Mapping cluster...
2013-4-23 11:31:35 com.twitter.cassie.ClusterRemapper com$twitter$cassie$ClusterRemapper$$fetchHosts
INFO: Mapping cluster...
2013-4-23 11:31:35 com.twitter.cassie.ClusterRemapper com$twitter$cassie$ClusterRemapper$$fetchHosts
INFO: Mapping cluster...
[info] + CassandraStorage should
[info] + getSpansByTraceId
[info] + getSpansByTraceIds
[info] + getSpansByTraceIds should return empty list if no trace exists
[info] + set time to live on a trace and then get it
[info] Building zipkin-collector-core-c7b50693.zip from 65 files.
[info] + RedisListMapSpec should
[info] + insert an element properly
[info] + insert a few elements properly
[info] + remove an element properly
[info] + remove a few elements properly
[info] + obliterate a key (and check that it is in fact obliterated)
[info] + array map should get timeout properly
[info] + array map should get invalid timeout properly
[info] + array map should timeout properly
[info] + array map should reset timeout properly
2013-4-23 11:31:35 com.twitter.cassie.ClusterRemapper com$twitter$cassie$ClusterRemapper$$fetchHosts
INFO: Mapping cluster...
2013-4-23 11:31:35 com.twitter.cassie.ClusterRemapper com$twitter$cassie$ClusterRemapper$$fetchHosts
INFO: Mapping cluster...
[info] + RedisSetMap should
[info] + add an item then get it out
[info] + add many items and then get them out
2013-4-23 11:31:36 com.twitter.cassie.ClusterRemapper com$twitter$cassie$ClusterRemapper$$fetchHosts
INFO: Mapping cluster...
2013-4-23 11:31:36 com.twitter.cassie.ClusterRemapper com$twitter$cassie$ClusterRemapper$$fetchHosts
INFO: Mapping cluster...
[info] + BucketedColumnFamily should
[info] + insert and get row
[info] + insert and get row slice
[info] + only get limited number of entries
[info] + roll over buckets correctly
[info] Packaging /home/tianjiu.ff/zipkin/zipkin-collector-service/target/zipkin-collector-service-1.0.1-SNAPSHOT-javadoc.jar ...
[info] Done packaging.
[info] Passed: : Total 28, Failed 0, Errors 0, Passed 25, Skipped 3
[info] Packaging /home/tianjiu.ff/zipkin/zipkin-collector-service/target/zipkin-collector-service-1.0.1-SNAPSHOT.jar ...
[info] Done packaging.
2013-4-23 11:31:36 com.twitter.cassie.ClusterRemapper com$twitter$cassie$ClusterRemapper$$fetchHosts
INFO: Mapping cluster...
2013-4-23 11:31:36 com.twitter.cassie.ClusterRemapper com$twitter$cassie$ClusterRemapper$$fetchHosts
INFO: Mapping cluster...
[info] + CassandraAggregates should
[info] + retrieval
[info] + getTopAnnotations
[info] + getTopKeyValueAnnotations
[info] + getDependencies
[info] + storage
[info] + storeTopAnnotations
[info] + storeTopKeyValueAnnotations
[info] + storeDependencies
[info] + clobber old entries
[info] Building zipkin-web.zip from 86 files.
2013-4-23 11:31:36 com.twitter.cassie.ClusterRemapper com$twitter$cassie$ClusterRemapper$$fetchHosts
INFO: Mapping cluster...
[info] + RedisIndex should
[info] + index and get span names
[info] + index and get service names
[info] o index only on annotation in each span with the same value
[info] o getTraceIdsByName
[info] + getTraceIdsByAnnotation
[info] + not index empty service name
[info] + not index empty span name
2013-4-23 11:31:37 com.twitter.cassie.ClusterRemapper com$twitter$cassie$ClusterRemapper$$fetchHosts
INFO: Mapping cluster...
2013-4-23 11:31:37 com.twitter.cassie.ClusterRemapper com$twitter$cassie$ClusterRemapper$$fetchHosts
INFO: Mapping cluster...
[info] + CassandraIndex should
[info] + index and get span names
[info] + index and get service names
[info] + index only on annotation in each span with the same value
[info] + getTraceIdsByName
[info] + getTracesDuration
[info] + get no trace durations due to missing data
[info] + getTraceIdsByAnnotation
[info] + not index empty service name
[info] + not index empty span name
[info] No tests to run for zipkin-kafka/test:test
[info] Generating API documentation for main sources...
[info] Passed: : Total 32, Failed 0, Errors 0, Passed 32, Skipped 0
[info] + RedisStorage should
[info] + getTraceById
[info] + getTracesByIds
[info] + getTracesByIds should return empty list if no trace exists
[info] + set time to live on a trace and then get it
[info] + RedisConversions should
[info] + convert from a TraceLog and back
[info] + convert from TimeRange and back
[info] + convert from long and back
[info] + convert from double and back
[info] + convert from string and back
[info] + convert from span and back
[info] Writing build properties to: /home/tianjiu.ff/zipkin/zipkin-test/target/resource_managed/main/com/twitter/zipkin-test/build.properties
[info] + RedisHash should
[info] + place a new item and get it out
[info] + place a new item and update it
[info] + place a few items and get them out
[info] + place a few items and remove some
[info] + place an item and incr it
[info] + RedisSortedSetMap should
[info] + put a value in and get it out
[info] + follow the workflow for an index
[info] Building zipkin-kafka-c7b50693.zip from 71 files.
2013-4-23 11:31:42 com.twitter.logging.Logger log
INFO: storeTopAnnotations: mockingbird; List(a, b, c)
model contains 8 documentable templates
[info] API documentation generation successful.
[info] Building zipkin-cassandra-c7b50693.zip from 65 files.
2013-4-23 11:31:42 com.twitter.logging.Logger log
WARNING: Invalid msg: %s
org.apache.thrift.transport.TTransportException
at
org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:
132)
at
org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
at
org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:
378)
at
org.apache.thrift.protocol.TBinaryProtocol.readI16(TBinaryProtocol.java:
278)
at
org.apache.thrift.protocol.TBinaryProtocol.readFieldBegin(TBinaryProtocol.java:
229)
at com.twitter.zipkin.gen.Span$Immutable$.decode(Span.scala:
65)
at com.twitter.zipkin.gen.Span$.decode(Span.scala:21)
at com.twitter.zipkin.gen.Span$.decode(Span.scala:10)
at com.twitter.scrooge.ThriftStructSerializer
$class.fromInputStream(ThriftStructSerializer.scala:25)
at com.twitter.zipkin.collector.processor.ScribeFilter$$anon
$1.fromInputStream(ScribeFilter.scala:38)
at com.twitter.scrooge.ThriftStructSerializer
$class.fromBytes(ThriftStructSerializer.scala:21)
at com.twitter.zipkin.collector.processor.ScribeFilter$$anon
$1.fromBytes(ScribeFilter.scala:38)
at com.twitter.scrooge.ThriftStructSerializer
$class.fromString(ThriftStructSerializer.scala:33)
at com.twitter.zipkin.collector.processor.ScribeFilter$$anon
$1.fromString(ScribeFilter.scala:38)
at com.twitter.zipkin.collector.processor.ScribeFilter$$anonfun
$apply$1$$anonfun$1.apply(ScribeFilter.scala:47)
at com.twitter.zipkin.collector.processor.ScribeFilter$$anonfun
$apply$1$$anonfun$1.apply(ScribeFilter.scala:47)
at com.twitter.util.Duration$.inMilliseconds(Time.scala:346)
at com.twitter.ostrich.stats.StatsProvider
$class.time(StatsProvider.scala:196)
at
com.twitter.ostrich.stats.StatsCollection.time(StatsCollection.scala:
30)
at com.twitter.zipkin.collector.processor.ScribeFilter$$anonfun
$apply$1.apply(ScribeFilter.scala:46)
at com.twitter.zipkin.collector.processor.ScribeFilter$$anonfun
$apply$1.apply(ScribeFilter.scala:44)
at scala.collection.TraversableLike$$anonfun$map
$1.apply(TraversableLike.scala:194)
at scala.collection.TraversableLike$$anonfun$map
$1.apply(TraversableLike.scala:194)
at scala.collection.LinearSeqOptimized
$class.foreach(LinearSeqOptimized.scala:59)
at scala.collection.immutable.List.foreach(List.scala:45)
at scala.collection.TraversableLike
$class.map(TraversableLike.scala:194)
at scala.collection.immutable.List.map(List.scala:45)
at
com.twitter.zipkin.collector.processor.ScribeFilter.apply(ScribeFilter.scala:
44)
at com.twitter.zipkin.collector.processor.ScribeFilterSpec$
$anonfun$1$$anonfun$apply$8.apply(ScribeFilterSpec.scala:68)
at com.twitter.zipkin.collector.processor.ScribeFilterSpec$
$anonfun$1$$anonfun$apply$8.apply(ScribeFilterSpec.scala:66)
at org.specs.specification.LifeCycle
$class.withCurrent(ExampleLifeCycle.scala:66)
at org.specs.specification.Examples.withCurrent(Examples.scala:
52)
at org.specs.specification.Examples$$anonfun$specifyExample
$1.apply(Examples.scala:114)
at org.specs.specification.Examples$$anonfun$specifyExample
$1.apply(Examples.scala:114)
at org.specs.specification.ExampleExecution$$anonfun$3$$anonfun
$apply$5.apply(ExampleLifeCycle.scala:219)
at scala.Option.getOrElse(Option.scala:108)
at org.specs.specification.LifeCycle
$class.executeExpectations(ExampleLifeCycle.scala:90)
at com.twitter.zipkin.collector.processor.ScribeFilterSpec.org
$specs$mock$JMockerExampleLifeCycle$$super
$executeExpectations(ScribeFilterSpec.scala:26)
at org.specs.mock.JMockerExampleLifeCycle
$class.executeExpectations(JMocker.scala:555)
at
com.twitter.zipkin.collector.processor.ScribeFilterSpec.executeExpectations(ScribeFilterSpec.scala:
26)
at org.specs.specification.LifeCycle$$anonfun
$executeExpectations$1.apply(ExampleLifeCycle.scala:90)
at org.specs.specification.LifeCycle$$anonfun
$executeExpectations$1.apply(ExampleLifeCycle.scala:90)
at scala.Option.map(Option.scala:133)
at org.specs.specification.LifeCycle
$class.executeExpectations(ExampleLifeCycle.scala:90)
at com.twitter.zipkin.collector.processor.ScribeFilterSpec.org
$specs$mock$JMockerExampleLifeCycle$$super
$executeExpectations(ScribeFilterSpec.scala:26)
at org.specs.mock.JMockerExampleLifeCycle
$class.executeExpectations(JMocker.scala:555)
at
com.twitter.zipkin.collector.processor.ScribeFilterSpec.executeExpectations(ScribeFilterSpec.scala:
26)
at org.specs.specification.ExampleContext$$anonfun
$executeExpectations$3$$anonfun$apply$3$$anonfun$apply
$4.apply(ExampleContext.scala:81)
at org.specs.specification.ExampleContext$$anonfun
$executeExpectations$3$$anonfun$apply$3$$anonfun$apply
$4.apply(ExampleContext.scala:81)
at scala.Option.map(Option.scala:133)
at org.specs.specification.ExampleContext$$anonfun
$executeExpectations$3$$anonfun$apply$3.apply(ExampleContext.scala:81)
at org.specs.specification.ExampleContext$$anonfun
$executeExpectations$3$$anonfun$apply$3.apply(ExampleContext.scala:81)
at org.specs.specification.ExampleContext$class.id
$1(ExampleContext.scala:32)
at org.specs.specification.ExampleContext$$anonfun
$1.apply(ExampleContext.scala:33)
at org.specs.specification.ExampleContext$$anonfun
$1.apply(ExampleContext.scala:33)
at org.specs.specification.ExampleContext$$anonfun
$executeExpectations$3.apply(ExampleContext.scala:81)
at org.specs.specification.ExampleContext$$anonfun
$executeExpectations$3.apply(ExampleContext.scala:80)
at scala.Option.map(Option.scala:133)
at org.specs.specification.ExampleContext
$class.executeExpectations(ExampleContext.scala:80)
at
org.specs.specification.Examples.executeExpectations(Examples.scala:
52)
at org.specs.specification.ExampleContext$$anonfun
$executeExpectations$3$$anonfun$apply$3$$anonfun$apply
$4.apply(ExampleContext.scala:81)
at org.specs.specification.ExampleContext$$anonfun
$executeExpectations$3$$anonfun$apply$3$$anonfun$apply
$4.apply(ExampleContext.scala:81)
at scala.Option.map(Option.scala:133)
at org.specs.specification.ExampleContext$$anonfun
$executeExpectations$3$$anonfun$apply$3.apply(ExampleContext.scala:81)
at org.specs.specification.ExampleContext$$anonfun
$executeExpectations$3$$anonfun$apply$3.apply(ExampleContext.scala:81)
at org.specs.specification.ExampleContext$class.id
$1(ExampleContext.scala:32)
at org.specs.specification.ExampleContext$$anonfun
$1.apply(ExampleContext.scala:33)
at org.specs.specification.ExampleContext$$anonfun
$1.apply(ExampleContext.scala:33)
at org.specs.specification.ExampleContext$$anonfun
$executeExpectations$3.apply(ExampleContext.scala:81)
at org.specs.specification.ExampleContext$$anonfun
$executeExpectations$3.apply(ExampleContext.scala:80)
at scala.Option.map(Option.scala:133)
at org.specs.specification.ExampleContext
$class.executeExpectations(ExampleContext.scala:80)
at
org.specs.specification.Examples.executeExpectations(Examples.scala:
52)
at org.specs.specification.ExampleExecution$$anonfun
$3.apply(ExampleLifeCycle.scala:219)
at org.specs.specification.ExampleExecution$$anonfun
$3.apply(ExampleLifeCycle.scala:198)
at org.specs.specification.ExampleExecution$$anonfun
$2.apply(ExampleLifeCycle.scala:181)
at
org.specs.specification.ExampleExecution.execute(ExampleLifeCycle.scala:
252)
at org.specs.specification.SpecificationExecutor$$anonfun
$executeExample$2.apply(SpecificationExecutor.scala:55)
at org.specs.specification.SpecificationExecutor$$anonfun
$executeExample$2.apply(SpecificationExecutor.scala:55)
at scala.Option.map(Option.scala:133)
at org.specs.specification.SpecificationExecutor
$class.executeExample(SpecificationExecutor.scala:55)
at
org.specs.specification.BaseSpecification.executeExample(BaseSpecification.scala:
58)
at
org.specs.specification.BaseSpecification.executeExample(BaseSpecification.scala:
58)
at org.specs.specification.ExampleLifeCycle$$anonfun
$executeExample$1.apply(ExampleLifeCycle.scala:125)
at org.specs.specification.ExampleLifeCycle$$anonfun
$executeExample$1.apply(ExampleLifeCycle.scala:125)
at scala.Option.map(Option.scala:133)
at org.specs.specification.ExampleLifeCycle
$class.executeExample(ExampleLifeCycle.scala:125)
at
org.specs.specification.Examples.executeExample(Examples.scala:52)
at
org.specs.specification.Examples.executeExample(Examples.scala:52)
at org.specs.specification.Examples$$anonfun$executeExamples
$2.apply(Examples.scala:80)
at org.specs.specification.Examples$$anonfun$executeExamples
$2.apply(Examples.scala:80)
at scala.Option.map(Option.scala:133)
at
org.specs.specification.Examples.executeExamples(Examples.scala:80)
at org.specs.specification.ExampleStructure
$class.ownFailures(ExampleStructure.scala:58)
at org.specs.specification.Examples.ownFailures(Examples.scala:
52)
at org.specs.specification.ExampleStructure
$class.failures(ExampleStructure.scala:64)
at org.specs.specification.Examples.failures(Examples.scala:
52)
at org.specs.specification.ExampleStructure$$anonfun$failures
$1.apply(ExampleStructure.scala:64)
at org.specs.specification.ExampleStructure$$anonfun$failures
$1.apply(ExampleStructure.scala:64)
at scala.collection.TraversableLike$$anonfun$flatMap
$1.apply(TraversableLike.scala:200)
at scala.collection.TraversableLike$$anonfun$flatMap
$1.apply(TraversableLike.scala:200)
at scala.collection.LinearSeqOptimized
$class.foreach(LinearSeqOptimized.scala:59)
at scala.collection.immutable.List.foreach(List.scala:45)
at scala.collection.TraversableLike
$class.flatMap(TraversableLike.scala:200)
at scala.collection.immutable.List.flatMap(List.scala:45)
at org.specs.specification.ExampleStructure
$class.failures(ExampleStructure.scala:64)
at org.specs.specification.Examples.failures(Examples.scala:
52)
at org.specs.specification.Examples.failures(Examples.scala:
52)
at org.specs.execute.HasResults
$class.failureAndErrors(HasResults.scala:61)
at
org.specs.specification.Examples.failureAndErrors(Examples.scala:52)
at org.specs.execute.HasResults$class.isOk(HasResults.scala:
69)
at org.specs.specification.Examples.isOk(Examples.scala:52)
at
org.specs.runner.NotifierRunner.reportSystem(NotifierRunner.scala:81)
at org.specs.runner.NotifierRunner$$anonfun
$reportASpecification$3.apply(NotifierRunner.scala:72)
at org.specs.runner.NotifierRunner$$anonfun
$reportASpecification$3.apply(NotifierRunner.scala:68)
at scala.collection.LinearSeqOptimized
$class.foreach(LinearSeqOptimized.scala:59)
at scala.collection.immutable.List.foreach(List.scala:45)
at
org.specs.runner.NotifierRunner.reportASpecification(NotifierRunner.scala:
68)
at org.specs.runner.NotifierRunner.report(NotifierRunner.scala:
58)
at org.specs.runner.NotifierRunner.report(NotifierRunner.scala:
45)
at org.specs.runner.Reporter$class.reportSpecs(Reporter.scala:
195)
at
org.specs.runner.NotifierRunner.reportSpecs(NotifierRunner.scala:45)
at org.specs.runner.TestInterfaceRunner$$anonfun$run
$3.apply(TestInterfaceRunner.scala:72)
at org.specs.runner.TestInterfaceRunner$$anonfun$run
$3.apply(TestInterfaceRunner.scala:72)
at scala.Option.map(Option.scala:133)
at
org.specs.runner.TestInterfaceRunner.run(TestInterfaceRunner.scala:72)
at
org.specs.runner.TestInterfaceRunner.run(TestInterfaceRunner.scala:65)
at sbt.TestRunner.delegateRun(TestFramework.scala:61)
at sbt.TestRunner.run(TestFramework.scala:55)
at sbt.TestRunner.runTest$1(TestFramework.scala:75)
at sbt.TestRunner.run(TestFramework.scala:84)
at sbt.TestFramework$$anonfun$6$$anonfun$apply$8$$anonfun$7$
$anonfun$apply$9.apply(TestFramework.scala:183)
at sbt.TestFramework$$anonfun$6$$anonfun$apply$8$$anonfun$7$
$anonfun$apply$9.apply(TestFramework.scala:183)
at sbt.TestFramework$.sbt$TestFramework$
$withContextLoader(TestFramework.scala:195)
at sbt.TestFramework$$anonfun$6$$anonfun$apply$8$$anonfun
$7.apply(TestFramework.scala:183)
at sbt.TestFramework$$anonfun$6$$anonfun$apply$8$$anonfun
$7.apply(TestFramework.scala:183)
at sbt.Tests$$anonfun$makeParallel$1$$anonfun$apply
$7.apply(Tests.scala:113)
at sbt.Tests$$anonfun$makeParallel$1$$anonfun$apply
$7.apply(Tests.scala:113)
at sbt.std.Transform$$anon$3$$anonfun$apply
$2.apply(System.scala:47)
at sbt.std.Transform$$anon$3$$anonfun$apply
$2.apply(System.scala:47)
at sbt.std.Transform$$anon$5.work(System.scala:67)
at sbt.Execute$$anonfun$submit$1$$anonfun$apply
$1.apply(Execute.scala:221)
at sbt.Execute$$anonfun$submit$1$$anonfun$apply
$1.apply(Execute.scala:221)
at sbt.ErrorHandling$.wideConvert(ErrorHandling.scala:18)
at sbt.Execute.work(Execute.scala:227)
at sbt.Execute$$anonfun$submit$1.apply(Execute.scala:221)
at sbt.Execute$$anonfun$submit$1.apply(Execute.scala:221)
at sbt.CompletionService$$anon$1$$anon
$2.call(CompletionService.scala:26)
at java.util.concurrent.FutureTask
$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at java.util.concurrent.Executors
$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask
$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:
1110)
at java.util.concurrent.ThreadPoolExecutor
$Worker.run(ThreadPoolExecutor.java:603)
at java.lang.Thread.run(Thread.java:636)
[info] + ScribeFilter should
[info] + convert gen.LogEntry to Span
[info] + convert gen.LogEntry with endline to Span
[info] + convert serialized thrift to Span
[info] + deal with garbage
[info] Passed: : Total 42, Failed 0, Errors 0, Passed 40, Skipped 2
2013-4-23 11:31:42 com.twitter.logging.Logger log
INFO: storeTopKeyValueAnnotations: mockingbird;List(a, b, c)
[info] Packaging /home/tianjiu.ff/zipkin/zipkin-test/target/zipkin-test-1.0.1-SNAPSHOT-javadoc.jar ...
[info] Done packaging.
[info] Building zipkin-redis-c7b50693.zip from 47 files.
2013-4-23 11:31:42 com.twitter.logging.Logger log
INFO: storeDependencies: mockingbird; List(service1:10, service2:5)
[info] + ScribeCollectorService should
[info] + add to queue
[info] + push back
[info] + ignore wrong category
[info] + store aggregates
[info] + store top annotations
[info] + store top key value annotations
[info] + store dependencies
[info] Packaging /home/tianjiu.ff/zipkin/zipkin-test/target/zipkin-test-1.0.1-SNAPSHOT.jar ...
[info] Done packaging.
[info] Passed: : Total 13, Failed 0, Errors 0, Passed 13, Skipped 0
[info] Building zipkin-collector-scribe-c7b50693.zip from 68 files.
2013-4-23 11:31:44 com.twitter.cassie.ClusterRemapper com$twitter$cassie$ClusterRemapper$$fetchHosts
INFO: Mapping cluster...
[info] + /config should
[info] + validate query configs
[info] + query-dev6936859538929584518.scala
[info] Passed: : Total 3, Failed 0, Errors 0, Passed 3, Skipped 0
[info] Building zipkin-query-service.zip from 77 files.
2013-4-23 11:31:45 com.twitter.cassie.ClusterRemapper$$anonfun$1$$anonfun$apply$mcV$sp$2 apply
SEVERE: error mapping ring
com.twitter.finagle.FailedFastException
at com.twitter.finagle.NoStacktrace(Unknown Source)
[info] + /config should
[info] + validate collector configs
[info] + collector-dev5294201324269635738.scala
[info] Passed: : Total 3, Failed 0, Errors 0, Passed 3, Skipped 0
[info] Building zipkin-collector-service.zip from 88 files.
2013-4-23 11:31:46 com.twitter.cassie.ClusterRemapper com$twitter$cassie$ClusterRemapper$$fetchHosts
INFO: Mapping cluster...
2013-4-23 11:31:46 com.twitter.cassie.ClusterRemapper$$anonfun$1$$anonfun$apply$mcV$sp$2 apply
SEVERE: error mapping ring
com.twitter.finagle.FailedFastException
at com.twitter.finagle.NoStacktrace(Unknown Source)
INF [20130423-11:31:47.421] stats: Starting LatchedStatsListener
INF [20130423-11:31:47.434] cassie: Mapping cluster...
INF [20130423-11:31:47.434] cassie: Mapping cluster...
700 [20130423-11:31:47.441] net: HttpServer created http
0.0.0.0/0.0.0.0:9900
700 [20130423-11:31:47.454] net: context created: /
700 [20130423-11:31:47.455] net: context created: /report/
700 [20130423-11:31:47.456] net: context created: /favicon.ico
700 [20130423-11:31:47.458] net: context created: /static
700 [20130423-11:31:47.461] net: context created: /pprof/heap
700 [20130423-11:31:47.465] net: context created: /pprof/profile
700 [20130423-11:31:47.466] net: context created: /pprof/contention
700 [20130423-11:31:47.467] net: context created: /tracing
700 [20130423-11:31:47.481] net: context created: /graph/
700 [20130423-11:31:47.483] net: context created: /graph_data
INF [20130423-11:31:47.484] admin: Starting TimeSeriesCollector
INF [20130423-11:31:47.486] admin: Admin HTTP interface started on port 9900.
INF [20130423-11:31:47.491] builder: Building 1 stores:
List(<function0>)
DEB [20130423-11:31:47.508] nio: Using the autodetected NIO constraint
level: 0
INF [20130423-11:31:47.621] collector: Starting WriteQueueWorker
INF [20130423-11:31:47.621] collector: Starting WriteQueueWorker
INF [20130423-11:31:47.622] collector: Starting WriteQueueWorker
INF [20130423-11:31:47.622] collector: Starting WriteQueueWorker
INF [20130423-11:31:47.623] collector: Starting WriteQueueWorker
INF [20130423-11:31:47.625] collector: Starting WriteQueueWorker
INF [20130423-11:31:47.625] collector: Starting WriteQueueWorker
INF [20130423-11:31:47.626] collector: Starting WriteQueueWorker
INF [20130423-11:31:47.626] collector: Starting WriteQueueWorker
INF [20130423-11:31:47.627] collector: Starting WriteQueueWorker
INF [20130423-11:31:47.637] builder: Starting collector service on addr /0.0.0.0:9410
DEB [20130423-11:31:47.827] cassie: Received: %s
DEB [20130423-11:31:47.827] cassie: Received: %s
700 [20130423-11:31:47.862] net: context created: /config/sampleRate
INF [20130423-11:31:47.873] query: Starting query thrift service on addr /0.0.0.0:9411
DEB [20130423-11:31:48.040] processor: Processing span:
Span(123,method,
123,Some(123),ArrayBuffer(Annotation(123000000,cs,Some(Endpoint(16843009,1,service)),None),
Annotation(123000000,cr,Some(Endpoint(16843009,1,service)),None)),ArrayBuffer(BinaryAnnotation(key,java.nio.HeapByteBuffer[pos=0
lim=5 cap=5],Bytes,Some(Endpoint(16843009,1,service)))),false) from
CgABAAAAAAAAAHsLAAMAAAAGbWV0aG9kCgAEAAAAAAAAAHsKAAUAAAAAAAAAew8ABgwAAAACCgABAAAAAAdU1MALAAIAAAACY3MMAAMIAAEBAQEBBgACAAELAAMAAAAHc2VydmljZQAACgABAAAAAAdU1MALAAIAAAACY3IMAAMIAAEBAQEBBgACAAELAAMAAAAHc2VydmljZQAADwAIDAAAAAELAAEAAAADa2V5CwACAAAABXZhbHVlCAADAAAAAQwABAgAAQEBAQEGAAIAAQsAAwAAAAdzZXJ2aWNlAAAA
DEB [20130423-11:31:48.059] query: getTracesByIds. ArrayBuffer(123)
adjust ArrayBuffer()
DEB [20130423-11:31:48.107] cassie: multiget_slice(Zipkin, [123],
ColumnParent(column_family:Traces),
SlicePredicate(slice_range:SliceRange(start:, finish:, reversed:false,
count:100000)), ONE)
DEB [20130423-11:31:48.122] cassie: insert(Zipkin, 123,
ColumnParent(column_family:Traces), Column(name:31 32 33 5F 2D 31 39
31 33 32 37 32 33 39 31, value:E5 01 0C 0A 00 01 00 09 01 40 7B 0B 00
03 00 00 00 06 6D 65 74 68 6F 64 0A 00 04 09 17 10 00 7B 0A 00 05 11
0B 1C 0F 00 06 0C 00 00 00 02 0D 36 18 07 54 D4 C0 0B 00 02 01 12 1C
63 73 0C 00 03 08 00 01 01 01 10 06 00 02 00 01 09 4E 28 07 73 65 72
76 69 63 65 00 00 0A 09 69 2E 33 00 00 72 7A 33 00 20 0F 00 08 0C 00
00 00 01 0B 05 3B 0C 03 6B 65 79 09 6D 18 05 76 61 6C 75 65 08 05
AF..., timestamp:1366687908117000, ttl:604800), ONE)
DEB [20130423-11:31:48.145] cassie: insert(Zipkin, service.method,
ColumnParent(column_family:ServiceSpanNameIndex), Column(name:00 00 00
00 07 54 D4 C0, value:00 00 00 00 00 00 00 7B, timestamp:
1366687908145000, ttl:259200), ONE)
DEB [20130423-11:31:48.149] cassie: insert(Zipkin,
java.nio.HeapByteBuffer[pos=0 lim=11 cap=11],
ColumnParent(column_family:ServiceNameIndex), Column(name:00 00 00 00
07 54 D4 C0, value:00 00 00 00 00 00 00 7B, timestamp:
1366687908149000, ttl:259200), ONE)
DEB [20130423-11:31:48.178] cassie: batch_mutate(Zipkin,
{java.nio.HeapByteBuffer[pos=0 lim=15
cap=15]={AnnotationsIndex=[Mutation(column_or_supercolumn:ColumnOrSuperColumn(column:Column(name:
00 00 00 00 07 54 D4 C0, value:00 00 00 00 00 00 00 7B, timestamp:
1366687908174000, ttl:259200)))]}, java.nio.HeapByteBuffer[pos=0
lim=21
cap=21]={AnnotationsIndex=[Mutation(column_or_supercolumn:ColumnOrSuperColumn(column:Column(name:
00 00 00 00 07 54 D4 C0, value:00 00 00 00 00 00 00 7B, timestamp:
1366687908170000, ttl:259200)))]}}, ONE)
DEB [20130423-11:31:48.187] cassie: insert(Zipkin,
java.nio.HeapByteBuffer[pos=0 lim=16 cap=16],
ColumnParent(column_family:ServiceNames), Column(name:73 65 72 76 69
63 65, value:, timestamp:1366687908187000, ttl:259200), ONE)
DEB [20130423-11:31:48.192] cassie: insert(Zipkin,
java.nio.HeapByteBuffer[pos=0 lim=11 cap=11],
ColumnParent(column_family:SpanNames), Column(name:6D 65 74 68 6F 64,
value:, timestamp:1366687908191000, ttl:259200), ONE)
DEB [20130423-11:31:48.202] cassie: batch_mutate(Zipkin,
{java.nio.HeapByteBuffer[pos=0 lim=8
cap=8]={DurationIndex=[Mutation(column_or_supercolumn:ColumnOrSuperColumn(column:Column(name:
00 00 00 00 07 54 D4 C0, value:, timestamp:1366687908201000, ttl:
259200))),
Mutation(column_or_supercolumn:ColumnOrSuperColumn(column:Column(name:
00 00 00 00 07 54 D4 C0, value:, timestamp:1366687908202000, ttl:
259200)))]}}, ONE)
DEB [20130423-11:31:48.233] query: tracesExist. ArrayBuffer(123, 5)
DEB [20130423-11:31:48.236] cassie: multiget_slice(Zipkin, [123, 5],
ColumnParent(column_family:Traces),
SlicePredicate(slice_range:SliceRange(start:, finish:, reversed:false,
count:1)), ONE)
INF [20130423-11:31:48.289] collector: Shutting down collector thrift service.
INF [20130423-11:31:48.295] admin: TimeSeriesCollector exiting by request.
INF [20130423-11:31:48.296] admin: TimeSeriesCollector exiting.
INF [20130423-11:31:48.630] collector: WriteQueueWorker exiting.
INF [20130423-11:31:48.632] collector: WriteQueueWorker exiting.
INF [20130423-11:31:48.632] collector: WriteQueueWorker exiting.
INF [20130423-11:31:48.630] collector: WriteQueueWorker exiting.
INF [20130423-11:31:48.630] collector: WriteQueueWorker exiting.
INF [20130423-11:31:48.630] collector: WriteQueueWorker exiting.
INF [20130423-11:31:48.632] collector: WriteQueueWorker exiting.
INF [20130423-11:31:48.634] collector: WriteQueueWorker exiting.
INF [20130423-11:31:48.634] collector: WriteQueueWorker exiting.
INF [20130423-11:31:48.708] collector: WriteQueueWorker exiting.
INF [20130423-11:31:48.715] stats: LatchedStatsListener exiting by request.
INF [20130423-11:31:48.716] stats: LatchedStatsListener exiting.
INF [20130423-11:31:48.724] query: Shutting down query thrift service.
[info] + ZipkinCollector and ZipkinQuery should
[info] + collect a trace, then return it when requested from the query daemon
[info] Passed: : Total 2, Failed 0, Errors 0, Passed 2, Skipped 0
[info] Building zipkin-test-c7b50693.zip from 92 files.
[success] Total time: 82 s, completed 2013-4-23 11:31:51

Johan Oskarsson

Apr 25, 2013, 8:37:25 PM
to zipki...@googlegroups.com
Hi Fei,

At the end of the output you attached to the email I see these lines:

[info] Building zipkin-test-c7b50693.zip from 92 files.
[success] Total time: 82 s, completed 2013-4-23 11:31:51

So it looks like it is creating a dist zip file somewhere. Does that file not exist anywhere after the build is done?
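If it is unclear where the archives landed, a `find` from the checkout root will list everything the build produced. The snippet below is only a sandboxed sketch of that invocation: the temporary directory tree stands in for a real zipkin checkout, and the `dist/` location is an assumption, so in practice you would run the `find` from the repo root after the build finishes.

```shell
# Sandboxed demo of locating built dist zips with find.
# The mktemp tree is a stand-in for a real zipkin checkout; the
# dist/ path and zip name here are assumptions for illustration.
root=$(mktemp -d)
mkdir -p "$root/dist"
touch "$root/dist/zipkin-web.zip"   # pretend the build produced this
find "$root" -name 'zipkin*.zip'    # prints the zip's full path
rm -rf "$root"
```

In a real checkout, `find . -name 'zipkin*.zip'` run after `bin/sbt update package-dist` should reveal where the archives were written.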

/Johan


Fei Fan

Apr 26, 2013, 1:26:52 AM
to zipki...@googlegroups.com

It does, thanks for the reply. It seems the build is OK; all dist zip files are generated. It's just that some exceptions are thrown during the build. I managed to start up the query/collector/web services and see the sample traces generated by zipkin-test. I plan to instrument a customized app in our company as the next step.

It would be great if you could shed some light on the exceptions in the build log.