Sensor table HBase


Andrei Ghiciac

Jun 12, 2017, 10:50:29 AM
to Hogzilla Users
Hello,

    I've tried to install Hogzilla several times, but every time I get the "Sensor table is empty in HBase. Hogzilla didn't run for the first time" error from pigtail.

I've tried to run sflow manually to see whether any output is generated, but nothing shows on the screen. Any idea how I could make it work?

OS: Debian 8.4

Package versions (environment):

SFLOWTOOL_VERSION=3.41
XDG_SESSION_ID=12
SPARK_HOME=/home/hogzilla/spark
TERM=xterm-256color
SHELL=/bin/bash
HADOOP_HOME=/home/hogzilla/hadoop
DERBY_HOME=/usr/lib/jvm/java-8-oracle/db
TMP_FILE=/tmp/.hzinstallation.temp
YARN_HOME=/home/hogzilla/hadoop
USER=hogzilla
HBASE_HOME=/home/hogzilla/hbase
HADOOP_COMMON_LIB_NATIVE_DIR=/lib/native
PATH=/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/sbin:/bin
MAIL=/var/mail/hogzilla
SPARK_VERSION=2.1.1
HADOOP_HDFS_HOME=
HADOOP_COMMON_HOME=
PWD=/home/hogzilla
HADOOP_VERSION=2.8.0
JAVA_HOME=/usr/lib/jvm/java-8-oracle
HADOOP_INSTALL=
HADOOP_CONF_DIR=/etc/hadoop
LANG=en_US.UTF-8
HADOOP_OPTS=-Djava.library.path=/lib
HADOOPDATA=/home/hogzilla/hadoop_data
HOME=/home/hogzilla
SHLVL=2
HBASE_VERSION=1.3.1
HADOOP_MAPRED_HOME=
LOGNAME=hogzilla
CLASSPATH=:/home/hogzilla/hbase/lib/*
J2SDKDIR=/usr/lib/jvm/java-8-oracle
J2REDIR=/usr/lib/jvm/java-8-oracle/jre

Paulo Angelo

Jun 12, 2017, 4:16:58 PM
to Hogzilla Users
Hi Andrei Ghiciac,

    It appears that Hogzilla IDS hasn't run yet. Execute:

~/bin/stop-hogzilla.sh
~/bin/start-hogzilla.sh


    and check the logs at /tmp/hogzilla.log.
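    A quick way to watch them as Hogzilla runs:

$ tail -f /tmp/hogzilla.log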

    You may also need to check whether the sFlows are coming into HBase. To do this:

a) Connect to HBase

$ ~/hbase/bin/hbase shell

b) Count the received sflows

$ count 'hogzilla_sflows'
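     Note that count is an HBase shell command, so run it at the hbase(main) prompt rather than the OS prompt. On a working setup you should see a non-zero, growing row count; the figures below are only illustrative:

hbase(main):001:0> count 'hogzilla_sflows'
1234 row(s) in 0.5230 seconds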

     The sFlows can be sent by a router or by a Linux box connected to a mirror port. How are you doing it?

Best regards,

Paulo Angelo

Andrei Ghiciac

Jun 15, 2017, 4:15:37 AM
to Hogzilla Users
Hello Paulo,

I've checked the logs in hogzilla.log, and the following exceptions are present:


17/06/15 08:11:20 INFO yarn.Client: Application report for application_1497278430484_0012 (state: ACCEPTED)
17/06/15 08:11:21 INFO yarn.Client: Application report for application_1497278430484_0012 (state: ACCEPTED)
17/06/15 08:11:22 INFO yarn.Client: Application report for application_1497278430484_0012 (state: ACCEPTED)
17/06/15 08:11:23 INFO yarn.Client: Application report for application_1497278430484_0012 (state: ACCEPTED)
17/06/15 08:11:24 INFO yarn.Client: Application report for application_1497278430484_0012 (state: ACCEPTED)
17/06/15 08:11:25 INFO yarn.Client: Application report for application_1497278430484_0012 (state: ACCEPTED)
17/06/15 08:11:26 INFO yarn.Client: Application report for application_1497278430484_0012 (state: ACCEPTED)
17/06/15 08:11:27 INFO yarn.Client: Application report for application_1497278430484_0012 (state: ACCEPTED)
17/06/15 08:11:28 INFO yarn.Client: Application report for application_1497278430484_0012 (state: FINISHED)
17/06/15 08:11:28 INFO yarn.Client:
client token: N/A
diagnostics: Uncaught exception: org.apache.hadoop.yarn.exceptions.InvalidResourceRequestException: Invalid resource request, requested virtual cores < 0, or requested virtual cores > max configured, requestedVirtualCores=7, maxVirtualCores=4
at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.validateResourceRequest(SchedulerUtils.java:288)
at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.normalizeAndValidateRequest(SchedulerUtils.java:248)
at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.normalizeAndvalidateRequest(SchedulerUtils.java:264)
at org.apache.hadoop.yarn.server.resourcemanager.RMServerUtils.normalizeAndValidateRequests(RMServerUtils.java:206)
at org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService.allocate(ApplicationMasterService.java:464)
at org.apache.hadoop.yarn.api.impl.pb.service.ApplicationMasterProtocolPBServiceImpl.allocate(ApplicationMasterProtocolPBServiceImpl.java:60)
at org.apache.hadoop.yarn.proto.ApplicationMasterProtocol$ApplicationMasterProtocolService$2.callBlockingMethod(ApplicationMasterProtocol.java:99)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:447)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:845)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:788)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1807)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2455)
ApplicationMaster host: 192.168.100.7
ApplicationMaster RPC port: 0
queue: default
start time: 1497514278785
final status: FAILED
tracking URL: http://siem.novalocal:8088/proxy/application_1497278430484_0012/
user: hogzilla
Exception in thread "main" org.apache.spark.SparkException: Application application_1497278430484_0012 finished with failed status
at org.apache.spark.deploy.yarn.Client.run(Client.scala:1180)
at org.apache.spark.deploy.yarn.Client$.main(Client.scala:1226)
at org.apache.spark.deploy.yarn.Client.main(Client.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:743)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:187)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:212)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:126)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
17/06/15 08:11:28 INFO util.ShutdownHookManager: Shutdown hook called
17/06/15 08:11:28 INFO util.ShutdownHookManager: Deleting directory /tmp/spark-b370f33b-062b-408c-b01c-706c1f0fc806


I'm using Host sFlow to simulate it before I set up the network port mirroring.

The count of 'hogzilla_sflows' is 0.

Thank you

Paulo Angelo

Jun 15, 2017, 4:19:59 PM
to Hogzilla Users
Andrei Ghiciac,

    Change "--executor-cores 7" to "--executor-cores 4" in the file "~/bin/start-hogzilla.sh". Then, stop and start hogzilla again:

~/bin/stop-hogzilla.sh
~/bin/start-hogzilla.sh

     Check the logs at /tmp/hogzilla.log to see if everything is OK.
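      For context, the 4-core limit in your error comes from YARN's scheduler configuration. Assuming you kept the Hadoop 2.8 defaults, it is this property in yarn-site.xml (shown only for reference, no change needed):

<property>
  <name>yarn.scheduler.maximum-allocation-vcores</name>
  <value>4</value>
</property>

      Any "--executor-cores" value above that limit is rejected before containers are allocated, which is exactly the InvalidResourceRequestException in your log.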

      You should have "sflowtool -p 6343 -l | ~/bin/sflow2hz -h 127.0.0.1 -p 9090 &>/dev/null" running to collect sFlows. Check that it's running (ps auxw | grep flow); if not, run "~/bin/start-sflow2hz.sh". This collector listens on port 6343 for the flows. A router (or another system, such as a Linux box) should create and send the flows.

What are you using to create and send sFlows on your local server?

Regards,

Paulo Angelo

Andrei Ghiciac

Jun 16, 2017, 4:12:45 AM
to Hogzilla Users
Hello Paulo,

 After changing the executor cores, it no longer gives any errors. I'm using hsflowd to generate samples and send them to the local sFlow collector, but no messages seem to appear in Graylog, and the count of 'hogzilla_sflows' in HBase is still 0.

 My hsflowd config is:

sflow {
  collector { ip=127.0.0.1 udpport=6343 }
  pcap { dev=eth0 }
  tcp {}
}


Processes:

# ps aux | grep -i flow
hogzilla  3078  0.0  0.0  13208  2196 ?        S    Jun12   0:00 /bin/bash /home/hogzilla/bin/start-sflow2hz.sh
root     27725  0.8  0.0  89724  2272 pts/3    Sl+  08:06   0:03 hsflowd -P -dd -f /etc/hsflowd.conf
hogzilla 27920  0.0  0.0   4148   676 pts/4    S+   08:08   0:00 sflowtool -p 6343 -l
hogzilla 27921  0.0  0.0  41204  2140 pts/4    S+   08:08   0:00 /home/hogzilla/bin/sflow2hz -h 127.0.0.1 -p 9090



Paulo Angelo

Jun 16, 2017, 11:27:55 AM
to Andrei Ghiciac, Hogzilla Users
Hi Andrei Ghiciac,

    Cool! Just check that you get a "SUCCEED" at the end of "/tmp/hogzilla.log".
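    For example:

$ tail -n 20 /tmp/hogzilla.log | grep SUCCEED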

    Now, let's check why the sFlows are not being saved into HBase. To do this:

a) Stop sflow2hz:

$ ~/bin/stop-sflow2hz.sh

b) Run sflowtool and check whether you get flows. Generate some traffic (access a few sites), wait a few seconds, and check whether lines beginning with "FLOW" appear. Stop it with Ctrl+C.

$ sflowtool -p 6343 -l

c) If that is OK, try feeding the flows into HBase and check whether an error appears.

$ sflowtool -p 6343 -l | ~/bin/sflow2hz -h 127.0.0.1 -p 9090


If you don't get flows at "b", you may have a problem with hsflowd. If you get errors at "c", send them to us so we have a clue.
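If "b" shows nothing at all, you can also confirm that the sFlow datagrams are reaching port 6343 in the first place (assuming tcpdump is installed; since hsflowd sends to 127.0.0.1, watch the loopback interface):

$ tcpdump -ni lo udp port 6343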

Best regards,

Paulo Angelo





Andrei Ghiciac

Jun 29, 2017, 5:05:57 AM
to Hogzilla Users, aghi...@gmail.com, p...@pauloangelo.com
Hello Paulo,

Sorry for not saying anything, but I was caught up in some projects.

I've looked again today at the Hogzilla instance I was testing, and I found something in /tmp/hogzilla.log:

17/06/29 08:58:20 INFO yarn.Client: Application report for application_1497278430484_0073 (state: FINISHED)
17/06/29 08:58:20 INFO yarn.Client: 
client token: N/A
diagnostics: User class threw exception: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=36, exceptions:
Thu Jun 29 08:58:19 UTC 2017, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=68475: com.google.protobuf.ServiceException: java.lang.NoClassDefFoundError: com/yammer/metrics/core/Gauge row 'hogzilla_flows,,00000000000000' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=siem.novalocal,16201,1497278446413, seqNum=0

ApplicationMaster host: 192.168.100.7
ApplicationMaster RPC port: 0
queue: default
start time: 1498726582092
final status: FAILED
user: hogzilla
Exception in thread "main" org.apache.spark.SparkException: Application application_1497278430484_0073 finished with failed status
at org.apache.spark.deploy.yarn.Client.run(Client.scala:1180)
at org.apache.spark.deploy.yarn.Client$.main(Client.scala:1226)
at org.apache.spark.deploy.yarn.Client.main(Client.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:743)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:187)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:212)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:126)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
17/06/29 08:58:20 INFO util.ShutdownHookManager: Shutdown hook called
17/06/29 08:58:20 INFO util.ShutdownHookManager: Deleting directory /tmp/spark-dc2a87b2-0a8d-4fe2-8595-af5572b10f02



Any idea what's wrong with it? Anything else I should check?

Thank you very much for your time helping out.