Erlang client code?


Lloyd Prentice

May 22, 2019, 6:52:37 PM
to LeoProject.LeoFS
Hello,

Since LeoFS is written in Erlang, I'm surprised that there is no Erlang client. Or is there?

If not, please suggest how to write one.

Thanks,

LRP

yoshiyuki kanno

May 22, 2019, 8:07:12 PM
to Lloyd Prentice, LeoProject.LeoFS
Hi,

Since LeoFS is compatible with AWS S3, you can use S3 clients written
in Erlang, such as https://github.com/erlcloud/erlcloud.

There is sample code using erlcloud in the LeoFS integration tests:
https://github.com/leo-project/leofs_client_tests/blob/develop/erlcloud/LeoFSTest.erl
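
For a quick start, a minimal sketch of pointing erlcloud at a LeoFS
gateway could look like the following (the endpoint, port, and
credentials are placeholders for a local dev setup, and the exact
erlcloud arities may differ between versions):

-module(leofs_hello).
-export([run/0]).

%% Upload and fetch one object through a LeoFS gateway using erlcloud's
%% S3 client. Assumes the gateway listens on localhost:8080 and that the
%% access/secret keys below match ones registered on your LeoFS manager.
run() ->
    {ok, _} = application:ensure_all_started(erlcloud),
    Config = erlcloud_s3:new("05236", "802562235", "localhost", 8080),
    %% 4-arity form: bucket name, canned ACL, location constraint, config.
    erlcloud_s3:create_bucket("test-bucket", private, none, Config),
    erlcloud_s3:put_object("test-bucket", "hello.txt",
                           <<"Hello, LeoFS!">>, [], Config),
    Obj = erlcloud_s3:get_object("test-bucket", "hello.txt", [], Config),
    proplists:get_value(content, Obj).

Note that erlcloud may address buckets virtual-hosted style
(bucket.hostname), so depending on your gateway setup you may need a
matching DNS/hosts entry or a path-style configuration.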

Hope you find it useful.

Best,
Kanno.

On Thu, May 23, 2019 at 7:52 Lloyd Prentice <ll...@writersglen.com> wrote:



--
Yoshiyuki Kanno
LeoFS Committer (http://leo-project.net/leofs/index.html)

yoshiyuki kanno

May 23, 2019, 2:45:09 AM
to Lloyd R. Prentice, LeoProject.LeoFS
Hi Lloyd,

> Since a given subscriber would only be working with one table at a time, and the tables are relatively small, could we create a LeoFS bucket when the subscriber first registers and store table data as serialized binary objects?

So that means there should be no races against any object stored in
LeoFS? (In other words, no two subscribers would ever access the same
object at the same time.) If so, it should work while providing data
redundancy, backup, and archiving, and also keeping data integrity.
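
For illustration, a minimal sketch of such a round-trip (the module
and function names are illustrative, term_to_binary is just one way to
serialize the table, and the erlcloud calls assume a config pointed at
your LeoFS gateway) could be:

-module(subscriber_table).
-export([save/3, load/3]).

%% Serialize the subscriber's ets table (assumed to be a named, set-type
%% table), store it as a single object in their bucket, then drop the
%% in-memory table.
save(Config, Bucket, Tab) ->
    Bin = term_to_binary(ets:tab2list(Tab)),
    erlcloud_s3:put_object(Bucket, atom_to_list(Tab) ++ ".bin", Bin, [], Config),
    ets:delete(Tab),
    ok.

%% Fetch the object, rebuild the ets table, and reinsert the records.
load(Config, Bucket, TabName) ->
    Obj = erlcloud_s3:get_object(Bucket, atom_to_list(TabName) ++ ".bin", [], Config),
    Records = binary_to_term(proplists:get_value(content, Obj)),
    Tab = ets:new(TabName, [set, public, named_table]),
    ets:insert(Tab, Records),
    Tab.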

> Our hypothesis is that LeoFS would provide the redundancy, backup, and archiving that we need, as well as accommodate larger binary objects within acceptable latency bounds.

Yes you are right.

Needless to say, please run tests/benchmarks with data and an
environment as similar to your production ones as possible before
using LeoFS in production :)

Feel free to contact me if you have any questions.

Best,
Kanno.

P.S. I added the LeoFS ML address to CC to share the knowledge with
our community.


On Thu, May 23, 2019 at 12:38 Lloyd R. Prentice <ll...@writersglen.com> wrote:
>
> Hi Kanno,
>
> Many thanks for your prompt response.
>
> We’re developing a subscriber-based Erlang web app and have a perhaps wacky idea. We wonder if we can get your thoughts.
>
> Each subscriber will be provided a small number of CRUD services, say two to ten ets tables or equivalent. Most of these services will have very few records. We project average storage volume per subscriber of well under 1 MB, with extremely low non-transactional traffic per subscriber. A subset of subscribers may also want to store binary objects such as images or large text files. And some subscribers may want to drop out for a spell, but then rejoin and would appreciate picking up their data where they left off.
>
> If our value proposition is right, we could potentially attract tens to hundreds of thousands of subscribers.
>
> Of foremost concern to our subscribers, however, is data protection and integrity. Given our business model, loss of data would quickly destroy us. Thus, we will need data redundancy, backup, and archiving.
>
> We’ve considered both Mnesia and Postgres for back end storage. But our wacky idea is this:
>
> Since a given subscriber would only be working with one table at a time, and the tables are relatively small, could we create a LeoFS bucket when the subscriber first registers and store table data as serialized binary objects? Thus, when a subscriber wishes to work with a given table, we’d import the serialized object, convert it to an Erlang list, fire up an ets table, and insert the list of records. Before we close the session, we’d again serialize the table, save it to LeoFS, and delete the ets table.
>
> Our hypothesis is that LeoFS would provide the redundancy, backup, and archiving that we need, as well as accommodate larger binary objects within acceptable latency bounds.
>
> Do you think this can work?
>
> Thanks again,
>
> Lloyd
>
> Sent from my iPad