Getting Permission Denied error while writing to HDFS even though it has permission

395 views

Krishna Doddi

Apr 1, 2015, 15:08:05
to gobbli...@googlegroups.com
Hi,

In the config file I set this property:

fs.uri=hdfs://$HOST_NAME:8020/appdata/ce_project/gobblin

and executed ./bin/gobblin-mapreduce.sh  --conf ../wikipedia/wikipedia.pull  --fs hdfs://$HOST_NAME:8020/appdata/ce_project/gobblin
I am seeing the error below. It looks like it is trying to write to "/". We changed the permissions on "/", but I still see the same issue. Any ideas?



ERROR [AbstractJobLauncher] Failed to acquire job lock for job PullFromWikipedia: org.apache.hadoop.security.AccessControlException: Permission denied: user=kdoddi, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
        at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkFsPermission(DefaultAuthorizationProvider.java:257)
        at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:238)
        at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:216)
        at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkPermission(DefaultAuthorizationProvider.java:145)
        at org.apache.sentry.hdfs.SentryAuthorizationProvider.checkPermission(SentryAuthorizationProvider.java:174)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:138)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6286)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6268)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:6220)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:2627)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2545)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2430)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:551)
        at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.create(AuthorizationProviderProxyClientProtocol.java:108)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:388)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:587)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)

org.apache.hadoop.security.AccessControlException: Permission denied: user=kdoddi, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
        at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkFsPermission(DefaultAuthorizationProvider.java:257)
        at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:238)
        at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:216)
        at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkPermission(DefaultAuthorizationProvider.java:145)
        at org.apache.sentry.hdfs.SentryAuthorizationProvider.checkPermission(SentryAuthorizationProvider.java:174)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:138)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6286)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6268)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:6220)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:2627)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2545)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2430)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:551)
        at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.create(AuthorizationProviderProxyClientProtocol.java:108)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:388)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:587)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)

        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
        at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
        at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
        at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:1616)
        at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1488)
        at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1413)
        at org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:387)
        at org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:383)
        at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
        at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:383)
        at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:327)
        at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:906)
        at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:887)
        at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:849)
        at org.apache.hadoop.fs.FileSystem.createNewFile(FileSystem.java:1149)
        at gobblin.runtime.FileBasedJobLock.tryLock(FileBasedJobLock.java:66)
        at gobblin.runtime.AbstractJobLauncher.tryLockJob(AbstractJobLauncher.java:420)
        at gobblin.runtime.AbstractJobLauncher.launchJob(AbstractJobLauncher.java:166)
        at gobblin.runtime.mapreduce.CliMRJobLauncher.run(CliMRJobLauncher.java:59)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
        at gobblin.runtime.mapreduce.CliMRJobLauncher.main(CliMRJobLauncher.java:128)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=kdoddi, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
        at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkFsPermission(DefaultAuthorizationProvider.java:257)
        at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:238)
        at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:216)
        at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkPermission(DefaultAuthorizationProvider.java:145)
        at org.apache.sentry.hdfs.SentryAuthorizationProvider.checkPermission(SentryAuthorizationProvider.java:174)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:138)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6286)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6268)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:6220)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:2627)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2545)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2430)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:551)
        at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.create(AuthorizationProviderProxyClientProtocol.java:108)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:388)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:587)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)

        at org.apache.hadoop.ipc.Client.call(Client.java:1411)
        at org.apache.hadoop.ipc.Client.call(Client.java:1364)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
        at $Proxy14.create(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
        at $Proxy14.create(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.create(ClientNamenodeProtocolTranslatorPB.java:264)
        at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:1612)
        ... 22 more
Failed to launch the job due to the following exception:
gobblin.runtime.JobException: Previous instance of job PullFromWikipedia is still running, skipping this scheduled run


Sahil Takiar

Apr 2, 2015, 1:57:05
to Krishna Doddi, gobbli...@googlegroups.com
Hey Krishna,

As a workaround, can you try setting “job.lock.enabled=false” in your .pull file and see if that works?
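Roughly, that would just be one extra line in the pull file, something like this (the surrounding properties are whatever the Wikipedia example already sets; only the last line is the workaround):

    # workaround: skip acquiring the HDFS-based job lock
    job.lock.enabled=false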

Can you also let me know what you are setting the environment variable “GOBBLIN_WORK_DIR” to?
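(For example, running echo "$GOBBLIN_WORK_DIR" in the shell you launch gobblin-mapreduce.sh from should print the value; if it prints an empty line, the variable is not set.)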

—Sahil


Yinan Li

Apr 2, 2015, 18:11:08
to gobbli...@googlegroups.com
Can you check if you have the config property job.lock.dir in your job config file?

Yinan

Krishna Doddi

Apr 9, 2015, 16:51:48
to gobbli...@googlegroups.com
Yes, it is set:

job.lock.dir=${env:GOBBLIN_WORK_DIR}/locks

Yinan Li

Apr 9, 2015, 16:58:50
to gobbli...@googlegroups.com
OK, the reason it was trying to write to / is that GOBBLIN_WORK_DIR is not set. Please use the following option of gobblin-mapreduce.sh:

--workdir <work dir>

The work dir should be on HDFS, since you specify --fs to point to an HDFS instance.
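For example, something along these lines (host and paths are placeholders taken from your earlier command):

    ./bin/gobblin-mapreduce.sh --conf ../wikipedia/wikipedia.pull \
        --fs hdfs://$HOST_NAME:8020/appdata/ce_project/gobblin \
        --workdir hdfs://$HOST_NAME:8020/appdata/ce_project/gobblin/workdir

With GOBBLIN_WORK_DIR unset, the ${env:GOBBLIN_WORK_DIR}/locks reference has nothing to expand, so the lock path presumably ends up directly under the HDFS root, which matches the inode="/" in the AccessControlException.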

Yinan

Krishna Doddi

Apr 12, 2015, 2:48:18
to gobbli...@googlegroups.com
Thanks, Li. This is working and I am able to see a working directory on HDFS, but I am getting a different exception now:

[kdoddi@hdic02gw01 gobblin-dist]$  ./bin/gobblin-mapreduce.sh  --conf ../wikipedia/wikipedia.pull  --fs /appdata/ce_project/gobblin/fs --workdir /appdata/ce_project/gobblin/workdir
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/kdoddi/gobblin-master/gobblin-dist/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p663.344/jars/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
WARN [MRJobLauncher] Job working directory already exists for job PullFromWikipedia
ERROR [AbstractJobLauncher] Failed to launch and run job job_PullFromWikipedia_1428821156497: java.lang.StackOverflowError
java.lang.StackOverflowError
        at java.util.AbstractList$Itr.<init>(AbstractList.java:318)
        at java.util.AbstractList$Itr.<init>(AbstractList.java:318)
        at java.util.AbstractList.iterator(AbstractList.java:273)
        at org.apache.hadoop.conf.Configuration.handleDeprecation(Configuration.java:587)
        at org.apache.hadoop.conf.Configuration.get(Configuration.java:1075)
        at org.apache.hadoop.fs.FileSystem.getDefaultUri(FileSystem.java:175)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:167)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:352)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:167)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:352)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:167)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:352)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:167)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:352)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:167)

Sahil Takiar

Apr 13, 2015, 14:32:25
to Krishna Doddi, gobbli...@googlegroups.com
A possibly related JIRA I found: https://issues.apache.org/jira/browse/HADOOP-9069

Still looking into it.

—Sahil


Sahil Takiar

Apr 13, 2015, 14:43:35
to Sahil Takiar, Krishna Doddi, gobbli...@googlegroups.com
Hey Krishna,

Can you tell me what value you are setting for “fs.uri”? According to some JIRAs I found about this, it looks like unless you specify a scheme in the URI, certain versions of Hadoop will throw this StackOverflowError.

URI Scheme Info: http://en.wikipedia.org/wiki/URI_scheme (Usually, when writing to HDFS, you set this to hdfs://)
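For example (host and port are placeholders), a fully-qualified value looks like

    fs.uri=hdfs://namenode-host:8020

whereas a bare path such as

    fs.uri=/appdata/ce_project/gobblin/fs

has no scheme, which is the pattern those JIRAs describe as triggering the recursive FileSystem.get calls in the StackOverflowError.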

JIRAs:


—Sahil

Krishna Doddi

Apr 13, 2015, 16:20:03
to gobbli...@googlegroups.com, sta...@linkedin.com, doddi....@gmail.com
Hi Sahil,

Initially, this is how I ran it with the complete URI:
 ./bin/gobblin-mapreduce.sh  --conf ../wikipedia/wikipedia.pull  --fs hdfs://$servername:8020/appdata/ce_project/gobblin/fs --workdir hdfs://$servername:8020/appdata/ce_project/gobblin/

I got this exception:

ERROR [AbstractJobLauncher] Failed to launch and run job job_PullFromWikipedia_1428956112251: org.apache.hadoop.fs.UnsupportedFileSystemException: No AbstractFileSystem for scheme: null
org.apache.hadoop.fs.UnsupportedFileSystemException: No AbstractFileSystem for scheme: null
        at org.apache.hadoop.fs.AbstractFileSystem.createFileSystem(AbstractFileSystem.java:153)
        at org.apache.hadoop.fs.AbstractFileSystem.get(AbstractFileSystem.java:241)
        at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:333)
        at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:330)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)


Then I changed it to:
 ./bin/gobblin-mapreduce.sh  --conf ../wikipedia/wikipedia.pull  --fs /appdata/ce_project/gobblin/fs --workdir /appdata/ce_project/gobblin/

I am able to see the working directory on HDFS, but I see this exception now:


SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p663.344/jars/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
ERROR [AbstractJobLauncher] Failed to launch and run job job_PullFromWikipedia_1428956315709: java.lang.StackOverflowError
java.lang.StackOverflowError
        at java.util.AbstractList$Itr.<init>(AbstractList.java:318)
        at java.util.AbstractList$Itr.<init>(AbstractList.java:318)
        at java.util.AbstractList.iterator(AbstractList.java:273)
        at org.apache.hadoop.conf.Configuration.handleDeprecation(Configuration.java:587)

Sahil Takiar

Apr 13, 2015, 16:49:38
to Krishna Doddi, gobbli...@googlegroups.com
For the first exception, it looks like the FileSystem is looking for the config “fs.AbstractFileSystem.hdfs.impl” in its Configuration object. Can you check your Hadoop setup and see if this parameter is set anywhere? It should be part of the core-default.xml file: https://hadoop.apache.org/docs/r2.3.0/hadoop-project-dist/hadoop-common/core-default.xml
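If it is missing from your cluster's config, the entry would look roughly like this (this value is the stock default from core-default.xml in recent Hadoop releases, so treat it as a sketch to confirm against your distribution):

    <property>
      <name>fs.AbstractFileSystem.hdfs.impl</name>
      <value>org.apache.hadoop.fs.Hdfs</value>
    </property>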

Also, what version of Hadoop is your cluster running?

Krishna Doddi

Apr 13, 2015, 20:14:00
to gobbli...@googlegroups.com, doddi....@gmail.com
Version 2.4, Cloudera 5.3.
I did not see those parameters in core-site.xml. I do not have permission to update core-site.xml; I have to work with the sysadmin to do that.
...

Sahil Takiar

Apr 21, 2015, 15:18:40
to Krishna Doddi, gobbli...@googlegroups.com
Hey Krishna,

Is there any Cloudera support we can engage for this issue? Without going through the Hadoop env setup it will be hard for us to debug the problem.

How does your team currently run Hadoop jobs? Is it mostly Pig or Hive processes? Are there any custom-built Java processes that run on the Hadoop cluster? I'm basically wondering if there is a specific issue in Gobblin, or if some special configuration is required to run custom Java processes. One way to find out would be to run the Cloudera Hello World Example: http://www.cloudera.com/content/cloudera/en/documentation/hadoop-tutorial/CDH5/Hadoop-Tutorial.html


Trying to set up and run the Hello World job on Hadoop will definitely help us better identify where the actual issue is.

Thanks

--Sahil


Sahil Takiar

Jan 6, 2016, 18:42:35
to Krishna Doddi, gobblin-users