Can't seem to publish to S3


Todd Sampson

Mar 9, 2016, 11:13:20 AM
to aptly-discuss
Aptly publish to S3 quit working for me. Anybody else? Any recommendations?

Thanks!
Todd

Environment
aptly version: 0.9.7~dev
Ubuntu 14.04.4 LTS

Command
dfr@m9kadmin-qc:~$ aptly publish switch -skip-contents -architectures=i386,amd64,all trusty s3:m9krepo:m9k m9k-2016-01-20
Loading packages...
Generating metadata files and linking package files...
It locks up at this point:
1 / 15 [========>--------------------------------------------------------------------------------------------------------------------] 6.67 % 0

I tried deleting the publish and republishing, but that didn't help.

s3cmd works fine
dfr@m9kadmin-qc:~$ s3cmd ls s3://m9krepo
                       DIR   s3://m9krepo/abc/
                       DIR   s3://m9krepo/ansible/
                       DIR   s3://m9krepo/m9k/
                       DIR   s3://m9krepo/trusty-backports/

TntDrive and S3 Browser work fine.


Config
dfr@m9kadmin-qc:~$ cat .aptly.conf 
{
  "rootDir": "/home/dfr/.aptly",
  "downloadConcurrency": 4,
  "downloadSpeedLimit": 0,
  "architectures": [],
  "dependencyFollowSuggests": false,
  "dependencyFollowRecommends": false,
  "dependencyFollowAllVariants": false,
  "dependencyFollowSource": false,
  "gpgDisableSign": false,
  "gpgDisableVerify": false,
  "downloadSourcePackages": false,
  "ppaDistributorID": "ubuntu",
  "ppaCodename": "",
  "S3PublishEndpoints": {
    "m9krepo": {
      "region": "us-east-1",
      "bucket": "m9krepo",
      "awsAccessKeyID": "****************",
      "awsSecretAccessKey": "**********************************",
      "prefix": "",
      "acl": "public-read",
      "storageClass": "",
      "encryptionMethod": "",
      "plusWorkaround": false
    }
  }
}

Andrey Smirnov

Mar 10, 2016, 4:36:15 PM
to Todd Sampson, aptly-discuss
Todd, at that point aptly should be doing a file listing to quickly check whether it needs to upload files later on. It can take a long time if the bucket is huge, but it's there to avoid making many HEAD calls (which are slower in the end). Does it really lock up, or does it keep sending S3 requests?
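To illustrate the tradeoff, here is a rough sketch (not aptly's actual code, and the names are made up) contrasting the two strategies against a stubbed S3 client: one paginated LIST walk of the bucket up front versus one HEAD request per package file. With a big bucket the LIST walk is a handful of round trips total, while HEAD is one round trip per file:

```python
class StubS3:
    """Stand-in for an S3 client that just counts API calls."""
    def __init__(self, keys):
        self.keys = set(keys)
        self.calls = 0

    def list_page(self, start=0, page_size=1000):
        # One LIST request returns up to page_size keys.
        self.calls += 1
        return sorted(self.keys)[start:start + page_size]

    def head(self, key):
        # One HEAD request checks a single key.
        self.calls += 1
        return key in self.keys

def existing_via_list(s3, candidates):
    # Walk the whole bucket once, page by page, then answer locally.
    known, start = set(), 0
    while True:
        page = s3.list_page(start)
        known.update(page)
        if len(page) < 1000:
            break
        start += 1000
    return {k for k in candidates if k in known}

def existing_via_head(s3, candidates):
    # One round trip per candidate file.
    return {k for k in candidates if s3.head(k)}

keys = [f"m9k/pool/pkg_{i}.deb" for i in range(5000)]
cands = keys[:200]

a = StubS3(keys); existing_via_list(a, cands)   # 6 LIST calls
b = StubS3(keys); existing_via_head(b, cands)   # 200 HEAD calls
```

The LIST cost is fixed by bucket size, not by how many files are being published, which is why the up-front listing can look like a hang on a large bucket even though requests are still flowing.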

Let me see if I can add debugging output for S3 as a config option.
