Kerberos Secured Hadoop Cluster Support


TheAjitator

Apr 14, 2017, 5:29:41 PM
to JanusGraph users list
I haven't really found anything suggesting that JanusGraph supports accessing a Kerberos-secured Hadoop cluster.

Has anyone tried this?  Does this work?  Should I create a request to have this feature implemented?

Thanks.

Jerry He

Apr 14, 2017, 6:12:41 PM
to JanusGraph users list
It would work.
But of course you would need to provide the Kerberos configuration on the JanusGraph side (which is a client to HBase, Solr, etc.) to make them talk to their Kerberos-enabled servers.

Thanks.

Jerry 

HadoopMarc

Apr 15, 2017, 7:31:05 AM
to JanusGraph users list
Yes, it works. Hadoop, HBase, etc. clients called by gremlin-console and gremlin-server will find any locally stored kerberos credentials pointed to by KRB5CCNAME, KRB5_KTNAME and KRB5_CONFIG. You can set:   export KRB5_TRACE=/dev/stdout  to get more debug info if any problems arise.
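
As an illustration, the environment setup Marc describes might look like this in the shell profile of the user running gremlin (all paths below are examples, not prescriptions):

```shell
# Illustrative Kerberos client environment (example paths -- adjust to your host).
export KRB5_CONFIG=/etc/krb5.conf                           # Kerberos client config
export KRB5CCNAME=/tmp/krb5cc_$(id -u)                      # credential (ticket) cache
export KRB5_KTNAME=/etc/security/keytabs/janusgraph.keytab  # keytab, if one is used

# Verbose tracing from the Kerberos libraries, as suggested above.
export KRB5_TRACE=/dev/stdout
```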

In TinkerPop 3.3 gremlin-server will also have a kerberos authenticator for kerberized access to gremlin-server. A PR for this, written by me, has been merged recently into TinkerPop.

Cheers,     Marc

On Saturday, April 15, 2017 at 00:12:41 UTC+2, Jerry He wrote:

Jerry He

Apr 17, 2017, 10:21:19 PM
to JanusGraph users list


On Saturday, April 15, 2017 at 4:31:05 AM UTC-7, HadoopMarc wrote:
Yes, it works. Hadoop, HBase, etc. clients called by gremlin-console and gremlin-server will find any locally stored kerberos credentials pointed to by KRB5CCNAME, KRB5_KTNAME and KRB5_CONFIG. You can set:   export KRB5_TRACE=/dev/stdout  to get more debug info if any problems arise.

Gremlin server would be better served by keytab-based acquisition of Kerberos credentials, because it is a long-running process.
 
In TinkerPop 3.3 gremlin-server will also have a kerberos authenticator for kerberized access to gremlin-server. A PR for this, written by me, has been merged recently into TinkerPop.

Could you give the JIRA?

Thanks,

Jerry

HadoopMarc

Apr 18, 2017, 2:19:42 AM
to JanusGraph users list
Sure, you can find it at:

https://github.com/apache/tinkerpop/pull/534
https://issues.apache.org/jira/browse/TINKERPOP-1566

Cheers,    Marc

On Tuesday, April 18, 2017 at 04:21:19 UTC+2, Jerry He wrote:

mata...@gmail.com

Nov 28, 2017, 11:59:54 AM
to JanusGraph users
Hi there,

Sorry for bumping an old thread. Did you get it to work in the end? I'm in the same boat at the moment, trying to hook up JanusGraph to HBase that is part of a Kerberized Hadoop cluster.

Thanks!

Kind regards,

Gyuszi

HadoopMarc

Nov 28, 2017, 3:05:24 PM
to JanusGraph users
Hi Gyuszi,

What do you want to achieve? Do you want to use JanusGraph directly from the console? Please describe.

If the JIRA ticket above is relevant (depends on what you want!): in the meantime it was released in TinkerPop 3.3.0:
http://tinkerpop.apache.org/docs/current/reference/#_security

Cheers,    Marc

On Tuesday, November 28, 2017 at 17:59:54 UTC+1, Gyuszi Kovács wrote:

mata...@gmail.com

Nov 28, 2017, 3:33:36 PM
to JanusGraph users
Hi Marc,

I'm rather new to the world of Big Data, Hadoop, and the lot, so please excuse me if my question or the situation seems trivial.

We have a couple of Hadoop clusters at work, for now I get to play around with one that is hosted on Amazon EC2. The cluster has two master nodes, two worker nodes, and two kafka nodes. We are running the Hortonworks Data Platform on the master and worker nodes, and Confluent-Kafka is configured on the Kafka nodes.

HBase is up and running on the master node. The whole cluster is Kerberized, and so this is usually the cause of most of our troubles when we are trying to connect different tools to part of the cluster.

In this case, I'm trying to set up JanusGraph on the master node to connect to HBase - I am initiating the connection from the gremlin console.

Previous experience suggests that I should provide some environment variables to either the gremlin console or JanusGraph, such as where to look for the krb5 config file and the jaas config file, but I have not managed to figure out how to feed these config files to them.

This is all I can recall for now, but once I'm back at work tomorrow I can post some config files and logs as needed.

Thank you for your help!

Kind regards,
Gyuszi

HadoopMarc

Nov 29, 2017, 9:16:48 AM
to JanusGraph users
Hi Gyuszi,

There are the following possible authentications in play:
  • you want to access a JanusGraph/HBase table from the gremlin console as a logged in user. This only requires kinit to provide a valid Kerberos ticket. Hadoop or HBase clients called by JanusGraph will pick up the ticket automatically from the KRB5CCNAME.
  • you want gremlin-server to access a JanusGraph/HBase table. Normally, you would have a janusgraph user on your system and a Kerberos keytab for accessing Hadoop and HBase. I believe Hadoop or HBase clients called by JanusGraph will pick up the keytab automatically if you set the KRB5_KTNAME correctly (if not, you also have to provide a specific jaas config to accept keytabs)
  • you want a logged-in user to authenticate to gremlin-server. This is the aspect covered in the link I sent earlier.
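
A hypothetical shell sketch of the first two scenarios (the principal name and keytab paths are invented for illustration; the '|| true' only keeps the sketch from aborting on hosts without a KDC):

```shell
# Scenario 1: interactive gremlin-console use -- the logged-in user obtains a
# TGT first; Hadoop/HBase clients then pick it up from the ticket cache.
kinit -kt /etc/security/keytabs/jkovacs.keytab jkovacs@EXAMPLE.COM || true

# Scenario 2: gremlin-server as a service account -- point the Kerberos
# libraries at a keytab so credentials can be (re)acquired without a TTY.
export KRB5_KTNAME=/etc/security/keytabs/janusgraph.keytab
```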
Cheers,    Marc
On Tuesday, November 28, 2017 at 21:33:36 UTC+1, Gyuszi Kovács wrote:

mata...@gmail.com

Jan 18, 2018, 7:35:32 AM
to JanusGraph users
Hi Marc,

Happy new year! :) Apologies for the late response, I decided to focus my efforts on other topics, but I'm back in the new year with fresh eyes and a drive to figure this one out.

This week I started from scratch with a local VM running Ubuntu in VirtualBox. On this VM I tested running HBase in its simplest mode (using the QuickStart guide on apache.org). I was able to connect to it from the gremlin console running on the same host (again, I followed the easiest steps provided in the getting started guide for JanusGraph).

In the next step I deployed a local instance of Zookeeper, followed by kerberizing both HBase and Zookeeper (using keytabs; I created local user accounts hbase and zookeeper, the respective processes are run by them). The kerberized HBase instance connected to the kerberized Zookeeper instance, they seem to like each other, no issues currently.

However, when I tried to connect to the newly kerberized HBase instance (following the same steps as before), I wasn't able to. The following errors were written to the screen (uploaded to paste2.org for easier readability). The steps that I tried:
The gremlin user has the following environment variables exported:

KRB5_CONFIG=/etc/krb5.conf
KRB5CCNAME=/tmp/krb5cc_1003
KRB5_KTNAME=/etc/security/keytabs/gremlin.keytab
He also has a ticket from Kerberos (obtained by kinit -kt /etc/security/keytabs/gremlin.keytab gremlin/jkovacs-V...@EXAMPLE.COM):

gremlin@jkovacs-VirtualBox:/opt/janusgraph-0.2.0-hadoop2$ klist
Ticket cache: FILE:/tmp/krb5cc_1003
Default principal: gremlin/jkovacs-VirtualBox@EXAMPLE.COM

Valid starting      Expires             Service principal
18.1.2018 12:37:26  18.1.2018 22:37:26  krbtgt/EXAMPLE.COM@EXAMPLE.COM
    renew until 19.1.2018 12:37:25
gremlin@jkovacs-VirtualBox:/opt/janusgraph-0.2.0-hadoop2$

I also placed the java.env and jaas.conf files into the conf dir of janusgraph.

gremlin@jkovacs-VirtualBox:/opt/janusgraph-0.2.0-hadoop2/conf$ cat java.env
export JVMFLAGS="-Djava.security.auth.login.config=/opt/janusgraph-0.2.0-hadoop2/conf/jaas.conf"
gremlin@jkovacs-VirtualBox:/opt/janusgraph-0.2.0-hadoop2/conf$ cat jaas.conf
Server {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  keyTab="/etc/security/keytabs/gremlin.keytab"
  storeKey=true
  useTicketCache=false
  principal="gremlin/jkovacs-V...@EXAMPLE.COM";
};

Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  keyTab="/etc/security/keytabs/gremlin.keytab"
  storeKey=true
  useTicketCache=false
  principal="gremlin/jkovacs-V...@EXAMPLE.COM";
};
gremlin@jkovacs-VirtualBox:/opt/janusgraph-0.2.0-hadoop2/conf$

With all this said and done, I still was not able to connect to HBase. Any and all help is appreciated :))

Kind regards,
Gyuszi

HadoopMarc

Jan 18, 2018, 9:34:15 AM
to JanusGraph users
Hi Gyuszi,

Let's check two things first:

 - you started the gremlin console as user gremlin/jkovacs-VirtualBox
 - you can start hbase shell as user gremlin/jkovacs-VirtualBox

Also, do you want to authenticate towards HBase with a keytab (which is usual for gremlin server) or with a Ticket Granting Ticket (which is usual for a logged-in end user wanting to use gremlin console)?

Cheers,    Marc

On Thursday, January 18, 2018 at 13:35:32 UTC+1, Gyuszi Kovács wrote:

marc.de...@gmail.com

Jan 18, 2018, 10:12:04 AM
to JanusGraph users
I checked for you:

Starting from a working config in which the gremlin console logs in to JanusGraph/HBase with a TGT, after a kdestroy I get the following error in the console:

Caused by: GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)

So it seems you have additional problems. Also, you have to do a list command in hbase shell to get the GSS error without proper Kerberos authentication.

Cheers,    Marc

On Thursday, January 18, 2018 at 15:34:15 UTC+1, HadoopMarc wrote:

mata...@gmail.com

Jan 18, 2018, 10:17:10 AM
to JanusGraph users
Hi Marc,

Thank you for the hints, I'm indeed having additional problems. I'll get back to you once I figure those out (it seems HBase and Zookeeper don't like each other that much after all!).

Kind regards,

Gyuszi

Gyuszi Kovács

Jan 22, 2018, 9:41:59 AM
to JanusGraph users
Hi Marc,

Building a working Kerberized HDFS+HBase+Zookeeper system in an Ubuntu VM, from the ground up, piece by piece, is proving to be a bit more complicated than I anticipated. I think I will leave that part of my endeavor for another time.

With that said, I still have access to the AWS sandbox and another physical cluster we have in-house for our dev-team. Both of these clusters are running the Hortonworks Data Platform (HDP).

I tried the same steps on both systems:
- logged in as a normal user (jkovacs), did a kinit and got a ticket for myself; then I launched the hbase shell and was able to get output for the list command (it threw GSS errors without the Kerberos ticket).
- I then launched the gremlin console as jkovacs and entered: gremlin> graph = JanusGraphFactory.open('/opt/janusgraph-0.2.0-hadoop2/conf/janusgraph-hbase.properties'); the result: these errors popped up

I'm pretty much back at square one. What I'd like to achieve is to be able to run the gremlin console and connect to HBase as a logged in end user, using TGT.


Kind regards,

Gyuszi

On Thursday, January 18, 2018 at 4:12:04 PM UTC+1, marc.de...@gmail.com wrote:

HadoopMarc

Jan 22, 2018, 11:36:20 AM
to JanusGraph users
Hi Gyuszi,

OK, situation clear. Let's check:

  • before calling gremlin you need something to make gremlin find your hadoop and hbase conf directories:
    export CLASSPATH=/usr/hdp/current/hadoop-client/conf:/usr/hdp/current/hbase-client/conf
    Do not add the HDP lib dirs to your classpath, though.
  • check that hdfs.ls('') from the gremlin-console does the same as hdfs dfs -ls from the command line
  • the storage.hostname line in your properties file has your entire zookeeper quorum (about three comma-separated fully-qualified hostnames without port number)

If this all seems OK, delete your table (assuming it is empty now and contains no valuable data) using hbase shell and restart from scratch, opening the table from gremlin-console. Just to be sure that you start from a clean situation.
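
The classpath step above could be scripted roughly like this (the HDP paths come from Marc's setup and may differ on other distributions; the directory check is just an illustrative sanity test):

```shell
# Put only the *conf* directories on the classpath, not the HDP lib dirs.
export CLASSPATH=/usr/hdp/current/hadoop-client/conf:/usr/hdp/current/hbase-client/conf

# Sanity check before starting gremlin: each entry should be a directory
# on a real HDP node (this warns rather than fails elsewhere).
old_ifs=$IFS; IFS=:
for d in $CLASSPATH; do
    [ -d "$d" ] || echo "warning: $d not found"
done
IFS=$old_ifs
```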


Disclaimer: I do not understand how your situation arose, I just mention some configs from my own setup with HDP-2.6.2 which I think might be relevant.


Cheers,    Marc



On Monday, January 22, 2018 at 15:41:59 UTC+1, Gyuszi Kovács wrote:

Gyuszi Kovács

Jan 23, 2018, 5:12:42 AM
to JanusGraph users
Hi Marc,

Thank you for the hints and tips. I performed the suggested changes. After I export CLASSPATH, the gremlin shell refuses to start altogether.

jkovacs@hadoop.lan:[/opt/janusgraph-0.2.0-hadoop2]: ./bin/gremlin.sh

         \,,,/
         (o o)
-----oOOo-(3)-oOOo-----
plugin activated: janusgraph.imports
plugin activated: tinkerpop.server
plugin activated: tinkerpop.utilities
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/janusgraph-0.2.0-hadoop2/lib/slf4j-log4j12-1.7.12.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/janusgraph-0.2.0-hadoop2/lib/logback-classic-1.1.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
09:07:25 WARN  org.apache.hadoop.util.NativeCodeLoader  - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Exception in thread "main" org.apache.tinkerpop.gremlin.groovy.plugin.PluginInitializationException: No FileSystem for scheme: hdfs
        at org.apache.tinkerpop.gremlin.hadoop.groovy.plugin.HadoopGremlinPlugin.afterPluginTo(HadoopGremlinPlugin.java:91)
        at org.apache.tinkerpop.gremlin.groovy.plugin.AbstractGremlinPlugin.pluginTo(AbstractGremlinPlugin.java:86)
        at org.codehaus.groovy.vmplugin.v7.IndyInterface.selectMethod(IndyInterface.java:232)
        at org.apache.tinkerpop.gremlin.console.PluggedIn.activate(PluggedIn.groovy:58)
        at org.apache.tinkerpop.gremlin.console.Console$_closure19.doCall(Console.groovy:146)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:93)
        at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:325)
        at org.codehaus.groovy.runtime.metaclass.ClosureMetaClass.invokeMethod(ClosureMetaClass.java:294)
        at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1022)
        at groovy.lang.Closure.call(Closure.java:414)
        at groovy.lang.Closure.call(Closure.java:430)
        at org.codehaus.groovy.runtime.DefaultGroovyMethods.each(DefaultGroovyMethods.java:2040)
        at org.codehaus.groovy.runtime.DefaultGroovyMethods.each(DefaultGroovyMethods.java:2025)
        at org.codehaus.groovy.runtime.dgm$158.doMethodInvoke(Unknown Source)
        at org.codehaus.groovy.vmplugin.v7.IndyInterface.selectMethod(IndyInterface.java:232)
        at org.apache.tinkerpop.gremlin.console.Console.<init>(Console.groovy:133)
        at org.codehaus.groovy.vmplugin.v7.IndyInterface.selectMethod(IndyInterface.java:232)
        at org.apache.tinkerpop.gremlin.console.Console.main(Console.groovy:478)
Caused by: java.io.IOException: No FileSystem for scheme: hdfs
        at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2644)
        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2651)
        at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:92)
        at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2687)
        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2669)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:371)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:170)
        at org.apache.tinkerpop.gremlin.hadoop.groovy.plugin.HadoopGremlinPlugin.afterPluginTo(HadoopGremlinPlugin.java:84)
        ... 21 more

jkovacs@hadoop.lan:[/opt/janusgraph-0.2.0-hadoop2]: echo $CLASSPATH
/usr/hdp/current/hadoop-client/conf:/usr/hdp/current/hbase-client/conf
jkovacs@hadoop.lan:[/opt/janusgraph-0.2.0-hadoop2]: echo $PATH
/usr/lib64/qt-3.3/bin:/opt/Python-3.6.3/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/opt/dell/srvadmin/bin:/home/jkovacs/bin

Without exporting CLASSPATH gremlin starts, but the output of hdfs.ls() is different from that of hdfs dfs -ls on the command line: the gremlin output lists the contents of my Linux home folder.
I modified the storage.hostname line in janusgraph-hbase.properties to include all three zookeeper nodes.

As for how this situation arose - back in November a colleague of mine requested that I get JanusGraph working in our dev environment; unfortunately, I haven't been able to fulfill his request yet :/

Thanks for your help so far Marc!

Kind regards,
Gyuszi

Gyuszi Kovács

Jan 24, 2018, 5:23:14 AM
to JanusGraph users
Hi Marc,

I manually copied the hadoop-hdfs-2.7.2.jar file (*PS) to the /opt/janusgraph-0.2.0-hadoop2/lib directory, and the gremlin shell is functional again with the exported CLASSPATH variable. The output of hdfs.ls() still differs from that of hdfs dfs -ls, and when trying graph = JanusGraphFactory.open('/opt/janusgraph-0.2.0-hadoop2/conf/janusgraph-hbase.properties') the same errors appear as before:

10:43:04 WARN  org.janusgraph.diskstorage.hbase.HBaseStoreManager  - Unexpected exception during getDeployment()
java.lang.RuntimeException: org.janusgraph.diskstorage.TemporaryBackendException: Temporary failure in storage backend

All this with my local account "jkovacs", holding a valid Kerberos ticket, and I'm also able to list the tables in the hbase shell.

Kind regards,
Gyuszi

*PS: in /usr/hdp/current/hadoop-hdfs-client/ the hadoop-hdfs.jar file points to hadoop-hdfs-2.7.3.2.6.0.3-8.jar, but the gremlin shell wouldn't play nice with that one, so I went with hadoop-hdfs-2.7.2.jar

Gyuszi Kovács

Jan 24, 2018, 9:42:50 AM
to JanusGraph users
Hi Marc,

Progress! A more experienced colleague had a moment to help me out and pointed out some faulty settings in the janusgraph-hbase.properties file. After updating the settings to these:

gremlin.graph=org.janusgraph.core.JanusGraphFactory
storage.backend=hbase
storage.hostname=<list-of-zookeeper-servers>
storage.hbase.table=janusgraf

it finally works (CLASSPATH had to be exported, as you recommended); hdfs.ls() produces the same output, and the command graph = JanusGraphFactory.open('/opt/janusgraph-0.2.0-hadoop2/conf/janusgraph-hbase.properties') executes without errors:

==>standardjanusgraph[hbase:[<list-of-zookeeper-hostnames>]

One additional thing that had to be done was to grant my user jkovacs access rights in Ranger (previously I had super-user access, so it wasn't an issue, but it is good to be aware of this as well).

Thank you again for your time and effort; I hope somebody else will find this topic useful as well.

Kind regards,

Gyuszi

HadoopMarc

Jan 24, 2018, 11:19:19 AM
to JanusGraph users
Hi Gyuszi,

Glad that you did not give up!

Btw, you also hit this issue: https://github.com/JanusGraph/janusgraph/commit/91244596e1ad75e7389a4d8f44d528b34073714d

@JanusGraph team: I just made a fresh download of the janusgraph-0.2.0 zip archive and the hadoop client is still missing. This is very confusing to new users. Anyone who reads this: you are better off cloning the git repo, applying the commit linked above and doing your own build (from memory: mvn clean install -DskipTests -Pjanusgraph-release). Results are in janusgraph-dist/target.

Cheers,     Marc

On Wednesday, January 24, 2018 at 15:42:50 UTC+1, Gyuszi Kovács wrote:

ltn8...@gmail.com

Jan 24, 2018, 10:24:25 PM
to JanusGraph users

Hi Gyuszi,

I also ran into a similar problem. When connecting JanusGraph to a Kerberos-secured HBase I encountered lots of problems, and I learned a lot from your experience.
Now it seems to work. Here are the steps I took; I hope they help:

1. Manually copy the hadoop-hdfs-2.7.2.jar file to the /opt/janusgraph-0.2.0-hadoop2/lib directory and change the file permissions using:
chmod 664 lib/hadoop-hdfs-2.7.2.jar

2. Copy the files and directories in both hadoop/conf and hdfs/conf to /opt/janusgraph-0.2.0-hadoop2/conf;

3. Configure /opt/janusgraph-0.2.0-hadoop2/conf/your-janusgraph.properties as follows:
cache.db-cache-time=180000
cache.db-cache-size=0.5
gremlin.graph=org.janusgraph.core.JanusGraphFactory
cache.db-cache-clean-wait=20
storage.hbase.table=janusgraph_default_autodeploy
storage.hostname=YOUR_HBASE_HOSTNAME
cache.db-cache=true
storage.backend=hbase
storage.hbase.ext.hbase.zookeeper.quorum=YOUR-ZOOKEEPER-HOSTNAME
storage.hbase.ext.zookeeper.znode.parent=/hbase-secure
storage.hbase.ext.hbase.zookeeper.property.clientPort=2181
storage.hbase.ext.hadoop.security.authentication=kerberos
storage.hbase.ext.hadoop.security.authorization=true
storage.hbase.ext.hbase.security.authentication=kerberos
storage.hbase.ext.hbase.security.authorization=true
java.security.krb5.conf=/etc/krb5.conf

Finally, it works:

# bin/gremlin.sh
gremlin> graph = JanusGraphFactory.open('conf/your-janusgraph.properties')
gremlin> g = graph.traversal()
gremlin> g.addV().property('name','test')
==>v[4296]
gremlin> g.tx().commit()
==>null

Besides, in my case JanusGraph, HBase and Hadoop were deployed on the same Linux server.
Kind regards,
Tingna Liu.

On Wednesday, January 24, 2018 at 6:23:14 PM UTC+8, Gyuszi Kovács wrote:

Gyuszi Kovács

Jan 25, 2018, 7:02:12 AM
to JanusGraph users
Hi Tingna,

Thank you for summing it all up! I wanted to write it up and post it here myself, but you beat me to it :) I do have one remark regarding the second step - I think Marc's advice is better, that is, to export the location of the config files in the CLASSPATH variable. This way, if something changes in the original config files, you don't have to copy them into the JanusGraph conf directory again.

Kind regards,

Gyuszi