Smalltalk's Greatest Performance Issue


Peter Ode

Jun 6, 2019, 3:28:21 PM
to VA Smalltalk
I applaud the VA Smalltalk team for the recent improvement to JIT performance as noted in another post.  
But, I see the single biggest issue for Smalltalk is the lack of true support for current multi-CPU hardware.

The greatest speed improvement for Smalltalk would be a real, multi-thread (multi-CPU) capable VM.

Unfortunately, I'm forced to use .NET and C# because my server-side programs MUST utilize all of the computing resources available. Others use Java for the same reasons.

As much as I use Smalltalk wherever I can, many modern programs require that ubiquitous multi-CPU hardware (from mobile devices to laptops and servers) be fully utilized.

For many of my Smalltalk based web apps (e.g. ecommerce systems), I spin up 1 CPU virtual servers — because VAST can't utilize more than 1 processor effectively.

Does Instantiations have any plans to create a multi-threading capable VM for Smalltalk?
Or, create a Smalltalk on top of .NET that can utilize that platform's excellent multi-threading capabilities?

In my mind, this is the single greatest issue that's holding up Smalltalk's capabilities for real-world deployments and subsequent increased visibility/success in the marketplace.

Seth Berman

Jun 6, 2019, 4:28:44 PM
to VA Smalltalk
Greetings Peter,

Thank you for bringing up this issue and I am happy to give you my thoughts on it.
I would suggest searching this forum for "multiple threads"; you'll find more content that might be interesting, as this has been brought up a few times.

"because VAST can't utilize more than 1 processor effectively"
The first thing I must do is make a small clarification to what you're saying.  I know what your real meaning is, though, so I'm not calling you incorrect.
VA Smalltalk can, in fact, use multiple processors concurrently (and effectively) to do work and has done so for years.
This is done with Asynchronous call-outs (aka Threaded FFI) and is one of the more complicated portions of the VM that allows for true concurrency.
I have used our crypto libraries to get 6 or 7 threads of encryption streams going at once and it works great.
Caveat: It has to be callouts to C functions.  No doubt a point of contention, but one that must be brought up for others.
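Seth's threaded call-out scenario has a close analogue in CPython, where C primitives such as hashlib release the global interpreter lock while they run. The sketch below is purely illustrative (not VAST code): several hash "streams", standing in for his encryption streams, execute concurrently on native threads.

```python
import hashlib
import time
from concurrent.futures import ThreadPoolExecutor

def hash_stream(seed: int, rounds: int = 50) -> str:
    """One 'encryption stream' stand-in: repeatedly hash a 1 MiB buffer."""
    buf = bytes([seed % 256]) * (1 << 20)
    digest = b""
    for _ in range(rounds):
        # hashlib's C implementation releases the GIL for large buffers,
        # so these calls genuinely run in parallel across cores --
        # analogous to VAST's asynchronous call-outs to C functions.
        digest = hashlib.sha256(buf + digest).digest()
    return digest.hex()

def run_concurrent(n_threads: int = 4) -> list:
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        return list(pool.map(hash_stream, range(n_threads)))

if __name__ == "__main__":
    start = time.perf_counter()
    digests = run_concurrent()
    print(f"{len(digests)} streams finished in {time.perf_counter() - start:.2f}s")
```

The interpreter itself stays single-threaded; only the C-level work runs in parallel, which is exactly the shape of the trade-off being discussed.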

The term "multi-thread (multi-CPU) capable VM" could also mean usage of multiple threads within the VM... like the garbage collector, for example.
This would in fact produce great performance benefits, but again... I'm going to guess this is not what you mean.

What you probably mean is: will VA Smalltalk be able to run multiple instances of the interpreter (with Smalltalk process context) using native threads instead of green threads?
Or, will each Smalltalk process be mapped to a native thread?

If so, I would say the answer is no...mostly by design.
I've commented on this before in other posts: as a Java developer in the past, my job was to go around and fix performance and concurrency issues that other teams in the organization created.  Both kinds of issues were due to misuse and misunderstanding of concurrency, and to languages like Java providing too low-level an abstraction in "Thread".  java.util.concurrent helped... but that came later... and got overused.
Other teams would use AtomicInteger in place of every 'int', thinking this made them thread-safe, and the performance was horrible.  All of this was amortized across the entire program... so it was always a head-shaking chore to fix.
So I don't consider their approach the pinnacle in this area.

While libraries mature and more keywords (e.g. async in C#) show up to tackle these problems... I think there are other ways to solve this with Smalltalk than attempting to introduce native threading where it simply doesn't belong.
I've also mentioned before that even if we got native threading in Smalltalk, there would be a whole new class of support cases for us to deal with...ones that are much harder to solve and at more cost to us.

Does Instantiations have any plans to create a multi-threading capable VM for Smalltalk?
- Sure, for the garbage collector we do.

Or, create a Smalltalk on top of .NET that can utilize that platform's excellent multi-threading capabilities?
- Unlikely. Half of me is an engineer, but the other half is a business owner, and I feel that is a very large risk considering the effort and cost we would have to put into it.
  Unless I knew there was a market for it, I would not put us on that path.

"In my mind, this is the single greatest issue that's holding up Smalltalk's capabilities for real-world deployments and subsequent increased visibility/success in the marketplace."
- I'm not convinced of this, but I do think it's important to be able to have Smalltalk images that can efficiently communicate with one another and share objects.  We have that too
with our SST framework, but it does require a great deal of expertise... more than I would like.

- Seth

Peter Ode

Jun 6, 2019, 5:31:01 PM
to va-sma...@googlegroups.com
Hello Seth,

I appreciate the thoughtful insights and comments on the multi-threading topic — especially from the perspective of the two hats that you wear: software engineer and business stakeholder. 

My use cases, which inspired the original post, have to do with designing, building and deploying sophisticated web apps. Most of my Smalltalk programs run server-side, where utilizing multiple CPUs is crucial.

There are many strategies that can be used to utilize multiple CPUs in an effort to minimize the choke point that is the VAST VM's single threading model: (a) push processing out to the client-side by creating JavaScript programs that run in the user's browser; (b) use a job-queue server-side to distribute certain jobs to other CPUs and programs or to prioritize jobs within the running image; (c) utilize multiple Smalltalk images running in a load-distributed setup ... 

For (a) JavaScript is an ugly language and a necessary evil. We're experimenting with PharoJS, hoping that we can write Smalltalk code that transpiles to JavaScript.

We've implemented a job queue, for (b), to execute long-running jobs at a lower priority (e.g. generating HTML pages for thousands of products, importing/exporting updates from/to accounting systems, email broadcasts, credit card authorizations...) so user interactions can have top priority and timely responses.
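The core of such a job queue can be sketched with the Python stdlib (all job names here are hypothetical, not Peter's actual system): interactive work drains ahead of batch work regardless of submission order.

```python
import queue

# Priorities: lower number = served first. Interactive user requests always
# drain ahead of long-running batch jobs, regardless of submission order.
INTERACTIVE, BATCH = 0, 10

class JobQueue:
    def __init__(self) -> None:
        self._q: queue.PriorityQueue = queue.PriorityQueue()
        self._seq = 0  # tie-breaker keeps equal-priority jobs FIFO

    def submit(self, priority: int, label: str) -> None:
        self._q.put((priority, self._seq, label))
        self._seq += 1

    def drain(self) -> list:
        order = []
        while not self._q.empty():
            _, _, label = self._q.get()
            order.append(label)
        return order

jq = JobQueue()
jq.submit(BATCH, "regenerate product pages")
jq.submit(INTERACTIVE, "render checkout page")
jq.submit(BATCH, "broadcast email")
jq.submit(INTERACTIVE, "authorize credit card")
print(jq.drain())
# -> ['render checkout page', 'authorize credit card',
#     'regenerate product pages', 'broadcast email']
```

In a real deployment the workers pulling from this queue would be the (green-threaded) Smalltalk processes or separate worker images.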

I understand the potential for increasing complexity and the resulting support costs if VAST were to add true multi-CPU capabilities to the development roadmap.  Maybe those developers/customers who need such features could have the option to subscribe to an elevated support level dedicated to multi-CPU issues. I'm not sure what makes sense from a business viewpoint for Smalltalk vendors.

This might be an area where the open-source model shines. I believe Pharo is going down a multi-threaded path. 

As for a Smalltalk on .NET, Essence# is an open-source, ANSI compliant Smalltalk implemented on top of .NET.
https://archive.codeplex.com/?p=essencesharp

Currently, developers can write Smalltalk in Pharo or VisualWorks and then output the code to run on .NET. This is for headless Smalltalk; the GUI side is implemented using typical .NET-based facilities. Essence# lacks the Smalltalk IDE but is a solid foundation for someone who wants to add the GUI support and eventually the IDE.

As a highlight, Essence# has some multi-threading and concurrency support based on the underlying .NET VM.

Smalltalk has been my secret weapon for developer productivity and winning coding contracts. But, too often, I cringe at having to use other languages and development environments when Smalltalk doesn't have the capabilities I need. 

No doubt, a multi-CPU capable Smalltalk will eventually surface. Sooner rather than later, I hope.

--
You received this message because you are subscribed to the Google Groups "VA Smalltalk" group.
To unsubscribe from this group and stop receiving emails from it, send an email to va-smalltalk+unsubscribe@googlegroups.com.
To post to this group, send email to va-sma...@googlegroups.com.
Visit this group at https://groups.google.com/group/va-smalltalk.
To view this discussion on the web visit https://groups.google.com/d/msgid/va-smalltalk/2da28011-8649-41c4-b97d-95422b392156%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Mariano Martinez Peck

Jun 6, 2019, 9:13:51 PM
to VA Smalltalk
Hi Peter, 

Thanks for the applause. For the record, the VM team did terrific work.

And thanks for bringing up this discussion. To me it's always very positive to discuss these things.

Please, see some answers below inlined. Note that this is just my personal opinion on the matter. 
 

There are many strategies that can be used to utilize multiple CPUs in an effort to minimize the choke point that is the VAST VM's single threading model: (a) push processing out to the client-side by creating JavaScript programs that run in the user's browser; (b) use a job-queue server-side to distribute certain jobs to other CPUs and programs or to prioritize jobs within the running image; (c) utilize multiple Smalltalk images running in a load-distributed setup ... 


I have been working with web development too over the past years, and with many Smalltalk dialects. I have always gone with (c). I find it easy and scalable. But more below.
 
For (a) JavaScript is an ugly language and a necessary evil. We're experimenting with PharoJS, hoping that we can write Smalltalk code that transpiles to JavaScript.


For that you may also want to check Amber Smalltalk [0]


This might be an area where the open-source model shines.

I don't think so. At least not for the Smalltalk community, and not if we are talking about a production-ready implementation. Doing a multi-core VM is a lot of effort and would imply breaking and re-doing a lot of things. I have been following multi-core VMs for a long time. I watched Hydra VM [1], RoarVM [2], and probably many more I am forgetting. But where are they now? Are they alive and maintained? Were they production ready, or even integrated into the main dialect? I doubt it.

You must distinguish a research experiment from a production-ready product. Having a stable, tested and maintainable multi-core VM is a huge project.
For my PhD, I did Marea, a project for application-level virtual memory. It was a nice experiment; I got a good number of papers accepted, etc. But it was an experiment. An idea. A prototype. Of course, it was never integrated into Pharo and is now dead. Probably just like the multi-core experiments above.

I believe Pharo is going down a multi-threaded path. 


Do you recall where you read that? I am really curious, and I probably missed it.  I honestly don't see any Smalltalk community being able to address that. I will give you a simple example. I started a simple database driver called SqueakDBX in 2008 (which we then moved to Pharo). It was a simple FFI wrapper of a C library. The FFI calls would BLOCK the whole VM while the C function was being executed. Can you imagine that for a database driver? All the other Smalltalk processes are on hold (yes, even those that should attend to other web requests!). I (and many others) have been wanting "just an async FFI". Eleven years later, and it is still not there (at least to my knowledge).

Did I do it? No. Did I put up 200k USD for someone to do it? No. So I can't complain, and I am not complaining. I am being realistic.

So... how would you realistically expect a multi-core VM?

 
As for a Smalltalk on .NET, Essence# is an open-source, ANSI compliant Smalltalk implemented on top of .NET.
Currently, developers can write Smalltalk in Pharo or VisualWorks then output the code to run on .NET. This is for headless Smalltalk. The GUI side is implemented using typical .NET based facilities. Essence# is lacking the Smalltalk IDE but is a solid foundation for someone that wants to add the GUI support and eventually the IDE. 
 
A highlight, Essence# has some multi-threading and concurrency support based on the underlying .NET VM. 


Thanks for the link. I wasn't aware of that one. I was aware of Redline Smalltalk [3]
 
Smalltalk has been my secret weapon for developer productivity and winning coding contracts. But, too often, I cringe at having to use other languages and development environments when Smalltalk doesn't have the capabilities I need. 

No doubt, a multi-CPU capable Smalltalk will eventually surface. Sooner-than-later is my hope. 


As a Smalltalk lover I also hope and cross my fingers for that. I just don't see it coming. I hope I am wrong. 

I would like to add some last words about your original option (c). I was quite happy with that approach, but it was still a pain to guarantee a certain availability, scale to more nodes, etc. Recently, I have been working a lot with Linux containers / Docker (and soon Kubernetes / OpenShift) and my opinion is that option (c) is now MUCH easier. It's incredible how easily you can set up a network of X nodes, each node having Y cores and running Z Smalltalk images. You get very good optimization of the resources. You can read all the posts I wrote about that [4] [5] [6].
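That ease of scaling can be sketched with a Swarm stack file; the service name, image name, and port below are hypothetical placeholders, not Mariano's actual setup:

```yaml
# Hypothetical docker-compose.yml: option (c) as a Docker Swarm stack.
version: "3.7"
services:
  seaside-app:
    image: myregistry/vast-seaside:latest   # hypothetical image name
    deploy:
      replicas: 6            # roughly one single-threaded image per core
      restart_policy:
        condition: on-failure
    ports:
      - "8080:8080"          # Swarm's ingress mesh round-robins requests
```

Deployed with `docker stack deploy`, each replica is an independent single-threaded image, and the routing mesh spreads incoming requests across all of them.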

Finally, I am seeing an architectural change in recent applications. It looks like things are moving from one (or a few) super huge 3GHz, 32-core servers attending huge requests, to instead having *lots* of smaller "nodes" with lower CPU frequency, attending smaller requests. You have probably heard about #serverless, #microservices, and all that buzz-wording. That's what I am talking about.

So even if someone put a lot of money into it, I am not sure that is where the technology is going...

Again, these are just my thoughts; I don't claim they are either good or bad.

Best, 




--
Mariano Martinez Peck
Software Engineer, Instantiations Inc.

Andres Valloud

Jun 7, 2019, 12:07:34 AM
to va-sma...@googlegroups.com
I really like the attitude of clearly distinguishing between an idea for something, and a finished product.  Both have their own merit, as well as their use cases.

With that said, I also think it's worthwhile to examine the factors that prevent industrialized solutions from being finished.


Louis LaBrunda

Jun 7, 2019, 9:33:40 AM
to VA Smalltalk
Hi Everybody,

I want to highlight a couple of things that may have gotten missed in this discussion.  Peter says that often "I spin up 1 CPU virtual servers"; Seth explains that VA Smalltalk does make good use of multiple CPUs, just not in the way Peter asks for.  If people are using systems like VMware and assigning only one CPU, they are crippling VA Smalltalk.  Hopefully they are not benchmarking that way.

On another side note: in the past, systems like VMware would only dispatch virtual servers defined with more than one CPU when ALL the defined CPUs were available.  I don't know what effect that had on performance and throughput, but I doubt it was good.

Lou




Richard Sargent

Jun 7, 2019, 1:31:13 PM
to VA Smalltalk
On Thursday, June 6, 2019 at 12:28:21 PM UTC-7, Peter Ode wrote:
I applaud the VA Smalltalk team for the recent improvement to JIT performance as noted in another post.  
But, I see the single biggest issue for Smalltalk is the lack of true support for current multi-CPU hardware.

Peter, I am not ignoring the other responses you've already received. In fact, my reply is partially based on what the others have said.

First, the only reason for having multiple real threads in a single image is when the processes need to share state. While it is possible to have your image be "the database" and save the image regularly to persist the data, that's not a reliable way to keep your data safe. (Unless the data is invariant and display-only, of course.)

So, if you persist your application outside the image, I don't see the value of multiple real threads within the image, especially when measured against the costs of doing so. (I am specifically thinking of the problems caused by doing so, per Seth's response.)

Once you have externally persisted data, you can run as many processes as you like with a single deployed image and take maximum advantage of the hardware. One of GemTalk's customers does exactly this (but not with VA Smalltalk). They run something like 300 processes hosting a Seaside application with a GemStone/S database. One could easily do similar with VA Smalltalk and an SQL database (or any other kind of database).
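Richard's pattern can be sketched in a few lines (the names here are hypothetical, and the external database is stubbed out): many identical OS processes of one deployed "image", each handling requests, with all shared state living outside the process.

```python
import os
from multiprocessing import Pool

def handle_request(request_id: int) -> tuple:
    """Handle one web request in whichever worker process picked it up.

    In a real deployment this is where the externally persisted state
    (SQL, GemStone/S, etc.) would be read and written -- stubbed out here.
    """
    return request_id, os.getpid()

if __name__ == "__main__":
    # Four identical "images", one OS process each; the kernel schedules
    # them across cores, and shared state lives in the external database.
    with Pool(processes=4) as pool:
        results = pool.map(handle_request, range(100))
    worker_pids = {pid for _, pid in results}
    print(f"100 requests served by {len(worker_pids)} worker processes")
```

Because the processes share nothing in memory, each one can stay single-threaded internally, which is exactly why this scales without a multi-threaded VM.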

Peter Ode

Jun 7, 2019, 3:57:13 PM
to va-sma...@googlegroups.com
Hi Richard, I like your clarification about "the only reason for having multiple threads... is to share state..."  

And, yes, the state that I'm sharing is persisted within an object database — so I can deploy multiples of the same image, each accessing the database. This is scenario (c) from my 2nd post, utilizing a load-distributor setup. This is certainly a proven and viable way to add capacity to handle increased web user traffic. But it is still problematic for my use case and business case. Allow me to explain.

In the late 1990s and early 2000s, I ran my own server room at great expense. To save costs, I eventually migrated to a co-location facility where I installed my servers in a rack and paid the monthly fees. Today, I'm able to utilize Virtual Private Servers (or "VPSs") available from DigitalOcean, Vultr, Amazon and others.  In each of these server-hosting scenarios, costs tend to be proportionate to the number of CPUs and the amount of RAM and disk space.

We have been providing an ecommerce hosting service for wholesalers, distributors and manufacturers. We also have some retail web store customers. As my system design and architecture is intended to support fast-growing enterprises and multi-tenant configurations, the business case is sensitive to costs for CPU, RAM and disk space.

The multi-Smalltalk-image configuration with distribution of web traffic does work, but it is too costly in terms of RAM use.  Each VAST image is 47 MB on disk but can exceed 100 MB in terms of RAM utilization. Launching additional images ramps up RAM requirements and the associated costs.  Hence my desire for an image/VM that can utilize multiple CPUs and be more efficient in terms of RAM (and setup time for a distributed-image scenario).

In the early days, pre-2000, we looked at several database options including GemStone, Versant, Objectivity, Omnibase, object-relational frameworks and others. At that time, 32-bit CPUs limited the amount of RAM we could put into a server box, and RAM was very costly. GemStone was high on our list but utilized too much RAM for our scaling needs versus memory costs. We decided on a database and configuration that would maximize our memory resources, saving costs as our customer base grew.

Today, we are still sensitive to the cost of server RAM.  Therefore, the ideal scenario is to have one image that is multi-CPU capable, enabling us to scale web traffic without being forced to launch multiple images. At this point, we find C# on .NET to be the most CPU- and RAM-efficient for web app deployment. I would much rather have a Smalltalk-centric solution.

I've learned from each of the posts in this topic/thread. I'm grateful for the insights and feedback. Still, I believe a multi-CPU capable Smalltalk would be highly desirable for my own purposes, and also to underpin large WordPress-scale projects that might showcase Smalltalk on a bigger stage. In my opinion, the lack of such multi-CPU capabilities removes Smalltalk from consideration for Facebook-scale projects or those projects that plan to grow/scale quickly.

My perception is that VAST has many Fortune 2000 customers with defined workloads that are not as sensitive to server costs.  But for Internet startups and small businesses that hope to grow quickly, the software systems must be able to scale cost-efficiently.  It would be great to have Smalltalk's developer productivity AND .NET's scalability. Maybe someone will see the promise of Essence#, a Smalltalk on top of .NET, as a foundation for a great Smalltalk of the future. https://essencesharp.wordpress.com/

Seth Berman

Jun 7, 2019, 11:06:08 PM
to VA Smalltalk
Hi Peter,

I can appreciate the demands that you're talking about for these systems and I think you do a good job in explaining that.
The point of yours I really like is about the scaling cost of resource utilization in today's environments.

I actually think this is a strength of VA Smalltalk, as it (aka VisualAge) was designed to run trim and on embedded processors.  You mention "47MB on disk", and to me that is a huge image deployment.  We routinely do 1-3MB image deployments on tiny IoT devices that are running full Seaside web servers and SST object remoting.  So it's actually well designed for cloud cost scenarios, and we have lots of customers that use it in that context.

"My perception is that VAST has many Fortune 2000 customers with defined workloads that are not as sensitive to server costs."
- At this point we have all types of customers: 1, 5, 10, 25, 50+ dev sites that do all sorts of interesting and innovative things, with any number of costs they are sensitive to.  I'm not sure even I have a complete perception of that.
Some of it is traditional thick client, some is mainframe, some cloud, in areas of finance, insurance, automotive, control systems...its extremely horizontal.
I am aware of quite a few systems that would be very sensitive to the kinds of costs you speak of, and I'm quite sure they go to great lengths to make sure their deployment images are well stripped down and to keep track of memory utilization.

I think Mariano can talk more about it, but trim, reduced-runtime VA Smalltalk images on Docker are a fantastic server solution for dynamically scaling with cost and efficiency.

- Seth

John M McIntosh

Jun 7, 2019, 11:23:12 PM
to va-sma...@googlegroups.com
On iOS, 10 years back, I did a file mmap of old space in Squeak with copy-on-write, then an anonymous mmap for new space. As iOS faults pages in only on reading, the actual memory footprint was quite small.  An iPhone 3G only had about 20MB of RAM to work with. With careful Smalltalk crafting in a Pharo 0.x I could avoid a full GC, and I removed all the memory scans due to icky code that looks for allInstances of a class.  As mmap could load at a particular address, we could avoid pointer swizzling and any endian fixup. So page frames faulted based only on method execution. Alas, I did not spend time resolving the problem of object locality, like the work that Craig has done.
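The copy-on-write file mapping John describes can be sketched with Python's mmap module on POSIX systems (MAP_PRIVATE): pages fault in from the file lazily, and a write copies only the touched page into anonymous memory without ever dirtying the file.

```python
import mmap
import os
import tempfile

# Write a tiny 4-page "old space" to disk.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"A" * mmap.PAGESIZE * 4)
    path = f.name

fd = os.open(path, os.O_RDWR)
# MAP_PRIVATE = copy-on-write: reads fault pages in from the file on demand;
# a write copies just that page into anonymous memory, leaving the file intact.
view = mmap.mmap(fd, 0, flags=mmap.MAP_PRIVATE)
view[0:4] = b"ZZZZ"                  # only the first page gets copied

with open(path, "rb") as check:
    print(view[0:4], check.read(4))  # the mapping changed, the file did not

view.close()
os.close(fd)
os.remove(path)
```

Untouched pages of the mapping cost no real RAM until they are read, which is why the resident footprint of a mostly-cold image can stay so small.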

Smalltalk systems don’t need to be bloated 



....
John M. McIntosh. Corporate Smalltalk Consulting Ltd https://www.linkedin.com/in/smalltalk


Sent from ProtonMail Mobile

Marten Feldtmann

Jun 8, 2019, 7:13:36 AM
to VA Smalltalk
Lots of people have faced this problem in the past ... and people have found solutions / ways to work around it.

Here are my PERSONAL impressions around this problem:

* VASmalltalk is able to do all of the work in my solution, but it may not be wise to use Smalltalk everywhere in my solution
* asynchronous calls are expensive in VASmalltalk (my experience up to 8.6), so it is wise to make such calls only in cases where the call is doing a good amount of work
* heavy socket communication can take Smalltalk CPU power

MY PERSONAL solution therefore is/was (and I took it over to Gemstone):

* try to put all external communication into an external library with its own threading

What does this mean?

All my communication solutions to/from Smalltalk are done with 0MQ. 0MQ has its own worker thread pool, so a large amount of CPU time is returned to Smalltalk for logic execution.

Other non-0MQ communications are handled via separate server processes (e.g. my WebSocket server tasks are written in Python today).

This could also be done with http/https traffic; the remaining problem is large file uploads/downloads.

But this is my PERSONAL solution, not mainstream and not acceptable for single-language developers.





Mariano Martinez Peck

Jun 8, 2019, 12:05:36 PM
to VA Smalltalk
Interesting to read everybody's approaches.

For me, personally, I really love option (c), and now with Docker / Swarm / Kubernetes it's even much cooler. I can scale to X nodes and Y containers in seconds. I can guarantee availability and so on. I also managed to get small containers using the Debian slim version; the whole container with my app was about 70MB.

This is not an excuse or justification, but a 45MB image sounds huge. I have a Seaside app running in 3MB. Are you running the reduced runtime packaging? Have you ever analyzed why the image is that size?

One last comment: if I were to invest my own money in improving CPU usage, I wouldn't go with a multi-core VM but would instead provide nice tools for accessing and taking advantage of the GPU. I am seeing devices under 100USD with many, many cores of GPU. On VASTGoodies.com you can see there is a wrapper for OpenCL. That's something where I would indeed invest money... having good support for OpenCL, CUDA, etc.

I also don't give as much importance to RAM as you do. It's true RAM is much faster, but with today's SSDs you get closer, so virtual memory is not that slow anymore.

Anyway, I just wanted to add those personal opinions. 

Best, 


Marten Feldtmann

Jun 8, 2019, 12:48:12 PM
to VA Smalltalk
I was not totally clear here: the communication from/to a VASmalltalk system is done via 0MQ only.

The WebSocket server I mentioned does not do any heavy logic; it's just a relay server (or protocol switcher): it accepts/maintains WebSocket connections and sends the requests via 0MQ to VASmalltalk/Gemstone. This is the request/answer 0MQ subsystem of all my VASmalltalk/Gemstone systems.

I use Python here because its WebSocket libraries are pretty stable, but Python is much slower than Smalltalk, and I've noticed that Python can be a bottleneck at this point. So it may be replaced by a Mono/C#-based system or a Java-based system, depending on which ecosystem has the better WebSocket libraries.

Logging, for example, is also done using a 0MQ-oriented subsystem, but here the system is completely outside of VASmalltalk/Gemstone. VASmalltalk delivers its application log messages to this subsystem. The log subsystem is written in Python (easier to change/edit). One process within this subsystem writes all events to files; another accepts messages and offers output channels for other processes to get access to the log messages.

The other 0MQ subsystem is the domain-event system. Domain events (user defined) are sent (from Smalltalk) through this channel, and other processes can react to these events. This kind of "process notification" seems to be much faster than e.g. what Gemstone offers for their GEM communication.

Load balancing? Same answer here: use another pattern of the 0MQ networking library and you have load balancing among various VASmalltalk/Gemstone processes.
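The fair-queuing idea behind that 0MQ load-balancing pattern can be sketched with the Python stdlib (all names are illustrative; real 0MQ would use a PUSH/PULL or ROUTER/DEALER socket pair between processes rather than an in-process queue): one relay pushes requests onto a shared queue, and several backends, standing in for VASmalltalk/Gemstone worker processes, pull from it.

```python
import queue
import threading

tasks: queue.Queue = queue.Queue()
served: dict = {}
lock = threading.Lock()

def backend(name: str) -> None:
    """A stand-in for one VASmalltalk/Gemstone worker process."""
    while True:
        req = tasks.get()
        if req is None:          # shutdown sentinel
            tasks.task_done()
            break
        with lock:
            served[name] = served.get(name, 0) + 1
        tasks.task_done()

workers = [threading.Thread(target=backend, args=(f"node-{i}",)) for i in range(3)]
for w in workers:
    w.start()
for req_id in range(90):         # the relay "pushes" 90 requests
    tasks.put(req_id)
for _ in workers:                # one sentinel per backend
    tasks.put(None)
for w in workers:
    w.join()
print(sum(served.values()), "requests spread over", len(served), "backends")
```

Whichever backend is idle picks up the next request, so slow workers naturally receive less work — the same property the 0MQ patterns give across OS processes and machines.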

So in the end, the running application may have 10-20 processes (in my Gemstone solutions). Most of them do small jobs, they are reused in other projects, and I use all the CPUs I have in my computer ...

And another advantage: thanks to 0MQ, so many different languages can be used here.

So for me, there is no performance problem with Smalltalk; it's a problem of how to build solutions with Smalltalk.

John M McIntosh

Jun 8, 2019, 3:50:26 PM
to va-sma...@googlegroups.com
On my current project we are doing HTTPS to a web service Java JAR. From the mobile app we ask the HTTPS server to upgrade us to the WebSocket protocol. After this is granted, we switch our socket to talk using the MQTT protocol.  The JAR has the responsibility of creating a socket, via a cluster manager, to a Smalltalk image that is listening for incoming MQTT traffic.  The JAR actually takes data we supply as part of the WebSocket escalation to do user credential validation and connection or reconnection. In general, after setup it just wraps/unwraps the MQTT data stream using the WebSocket protocol.



....
John M. McIntosh. Corporate Smalltalk Consulting Ltd https://www.linkedin.com/in/smalltalk



Peter Ode

Jun 10, 2019, 10:08:48 AM
to VA Smalltalk
With interest, I read several of the responses.

Indeed our image size is huge. It turns out I checked a server that we use for testing, and that image has not been stripped down — it has all the development tools, plus ODBMS support and a memory cache of certain performance-sensitive data ++

At this point, I'm most interested in the Docker scenario, especially if we can get each Docker instance down to 100 MB or less. 

In the long run, I would like to see a multi-CPU capable Smalltalk VM. Until that time, the Docker option will hopefully work out.

Mariano Martinez Peck

Jun 10, 2019, 10:21:32 AM
to VA Smalltalk
Hi Peter, my answers below

On Mon, Jun 10, 2019 at 11:08 AM Peter Ode <pete...@gmail.com> wrote:
With interest, I read several of the responses.


Me too. I always like to learn others' points of view.
 
Indeed our image size is huge. It turns out I checked a server that we use for testing and that image has not been stripped down — it has all the development tools plus ODBMS support and memory cache of certain performance sensitive data ++ 


Yeah, that's what it looked like from the outside... although it could have been a 40MB stripped-down icx, that didn't sound like the most likely scenario.
 
At this point, I'm most interested in the Docker scenario, especially if we can get each Docker instance down to 100 MB or less. 

Me too. I already sent you all my blog posts, and from there you can see the links to the examples on GitHub.
My Docker images (with Debian Slim) were about 70MB with everything: glibc, OpenSSL, locales, the VAST VM, and the VAST image. So even with 30MB more of image, you are still below a 100MB Docker image.

In my posts you can see how to build a swarm of nodes and how to balance your Seaside containers.

BTW, I will also submit a talk about this for ESUG 2019 in Germany.  


In the long run, I would like to see a multi-CPU capable Smalltalk VM. Until that time, the Docker option will hopefully work out.



And let us know if you need some help with Docker. We are not experts, but we would like our customers to be able to use it. 

I would also like to experiment soon with Kubernetes and OpenShift, but I couldn't find the time yet. If you give it a try, let us know ;)
 
Best, 

Mariano Martinez Peck

Jun 10, 2019, 10:32:31 AM
to VA Smalltalk
Sorry, but one last comment for Peter. It looks like in your case the "hosting" cost is an important part of your equation.
If this is the case, and you indeed like the Linux containers approach, did you consider using an ARM cluster for hosting? It's much cheaper than regular hosts.
If you host it yourself, you have options like:


I would love to get one of those and then use k3OS nodes + k3s (a lightweight version of Kubernetes). But you can do something similar with Swarm.

If you don't want to host it yourself, then there are even cloud providers built on ARM, like Amazon:


Cheers,

Peter Ode

Jun 11, 2019, 2:11:23 AM
to VA Smalltalk
Mariano, thanks for the Docker help. I might want to bounce some ideas around or get your recommendations relating to Docker + VAST.

About the ARM clusters/hosting: that's outside my realm of expertise. I feel more comfortable with a more mainstream hardware/OS platform — so I favor Ubuntu or Windows, depending on the use case.

I look forward to your ESUG 2019 video/presentation.  Thanks again.