On Thursday, May 11, 2017 at 8:04:31 PM UTC-7, polymorph self wrote:
> On Monday, May 8, 2017 at 9:02:47 AM UTC-4, kint...@gmail.com wrote:
> > Put some bash or forth examples here and I will show you how to make them work in prolog .
> > I think that would be a great way to learn prolog for both you and I .
> > I love bash , it's one of my favourite languages .
> > But I would most especially like to learn forth , so that is my preference .
> >
> > ~~kintalken~~
> >
> > P.S. Remember, after reading Clocksin and Mellish: if you use the '!' operator you ARE NOT making a logic-style program, you are making an imperative-style program. Prolog is a great language for either purpose.
> >
> > 99% of the prolog code out there does not meet even the minimum requirements of logical consistency. It's a brave new world: even though prolog is 40+ years old, the creation of truly logical programs is still in its infancy.
>
> so ! is bad, ok, mental note
> can prolog handle big data warehouse stuff that's bigger than 1 node can do?
"big data warehouse" thinking is big business , top-down , monolithic thinking . The conceptiopon is that you have a massive supercomputer always actrive , a big monolith .
That is antiquated, outdated, and ineffectual thinking, unless you are WebLogic etc. and you want to sell that idea to big customers who think like that, i.e. banks.
What you should imagine is that your network is a hive of barely communicating parts. Each part should be able to completely shut down and start back up again. You want to deploy a cluster to Amazon: make your app small, independently functioning parts that start up and shut down. Do NOT start imagining that the magic super-intelligence of your BIG-DATA SYSTEM is going to somehow maintain for you a "consistent state" across all the nodes of your network. Disabuse yourself of the notion that there is any central intelligence, supervisor, coordinator, or overseer.
Here is a very simple outline, using a web app as an example.
Put in place standard load balancing with sticky sessions. Get some company that does this for you and sells it cheap; maybe CloudFlare can do it for you. Don't waste your time implementing crap like that yourself; you'll have a 1% solution at 10x the cost.
Now, your application needs a database. That's right, databases are useful. You can use a NoSQL store if you want; whatever you choose, you need one or more central places for data.
Now, your application.
Your application is going to work like this:
The load balancer delivers your back-end node a request with an unknown (new) session id. Your prolog instance is going to start up (or already be waiting in a simple pool of available services, saving you the 0.5-second startup time).
The first thing your application does is restore the session state .
The session state corresponds to the session id you get from the load balancer.
The load balancer probably stores it as a cookie in the users browser .
Now in this case, it's a new session, so you start with a fresh, empty state.
If it is an existing session, then you load its state from your database.
Later on, you can optimize that, perhaps by storing some of that session state in your user's client browser.
But early optimization is bad optimization, so leave that till the $$$ start flowing and you get to play around with optional stuff.
Now your prolog app starts up fresh, charges up with the session data, then parses the request; your app processes it, the response is sent back, and your prolog app shuts down completely.
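A minimal sketch of that request cycle in plain Prolog. Everything here is hypothetical: the predicate names are my own, and the "database" is mocked with dynamic facts where a real deployment would talk to an actual store.

```prolog
% Mocked session store; stands in for your real database.
:- dynamic stored_session/2.

% Restore the state for a session id, or start fresh for a new one.
% (The if-then-else here is deliberately imperative plumbing.)
restore_session(SessionId, State) :-
    (   stored_session(SessionId, State0)
    ->  State = State0
    ;   State = []                          % brand-new session: empty state
    ).

% Process one request against the session state. Toy example:
% each request is prepended to a history list, and the response
% reports how many requests this session has seen.
handle_request(Request, State0, State, Response) :-
    State = [Request|State0],
    length(State, N),
    format(atom(Response), "seen ~w requests", [N]).

% Persist the state; in this architecture the process would
% shut down completely right after this.
save_session(SessionId, State) :-
    retractall(stored_session(SessionId, _)),
    assertz(stored_session(SessionId, State)).

% One full cycle: restore, handle, save.
serve(SessionId, Request, Response) :-
    restore_session(SessionId, State0),
    handle_request(Request, State0, State, Response),
    save_session(SessionId, State).
```

Calling `serve/3` twice with the same session id picks up where the first call left off, which is the whole point of the restore/save bracket around each request.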
This is the right way to develop this: you are now motivated to make your app blistering fast, start to finish. Compare the "weblogic" solution, where a 15-minute startup for a "node" of the monolithic "cluster" is typical.
Need to scale suddenly because your app (or your customer's app) is much more popular than imagined? Easy: just add more Amazon nodes (cheap); they all run the same simple app(s), and you can deploy one in 5 seconds.
Need to scale suddenly because your big-data i'm-a-god central-control-intelligence weblogic cluster is falling behind? Allocate 2 weeks and $20,000.
> I heard something called pengines can allow prolog to use more than 1 box to form a kind of cluster?
IMO , pengines are a waste of time .
Again, you are not creating a "monolith" container hosting some kind of sub-process melee. You want your app lean, mean, and capable of being a fighting machine. People are attracted to that solution because they want to obtain (or want to BE) something "powerful" and "big".
Now, remember the suggestion above: at the end of a request, you push the session state out to a database. At the beginning, when receiving a request, you pull the session state from the database.
Consider this capability available via YAP:
---
save(+F,-OUT)
Saves an image of the current state of YAP in file F. From YAP4.1.3 onwards, YAP saved states are executable files in the Unix ports.
---
Replace "save session to database" in my diatribe above with "save prolog application + session data to session file " . Yap provides the whole infrastructure in one function call . The entire state of the process , the compiled code of your application , the data needed by your application , the data correspondent to the session : all of it get's saved into one executable file .
Next time you get a request for that session, launch your "saved prolog program + state": it is immediately in service and ready to go.
Now, you have to be aware that the load balancer might not always send a session to the SAME backend; a certain backend might become temporarily nonresponsive, for example. So you need to account for the fact that your session might end up needed by a different sub-worker-node than the one that originally created it. If you use the "save everything about this prolog instance as a file" strategy, maybe you can find a simple file-sharing cluster manager (there are lots) that distributes your "save file" to other nodes in the cluster. If your cluster gets big, you organize it into sub-clusters: for example, 6 sub-worker-nodes per sub-node and 4 sub-nodes in your cluster. The load balancer knows that it should stay in the same sub-node group when it needs to roll over and find a different sub-worker-node for a session (because the sub-worker-node it used for that session last time is unavailable).
That way you can share your sessions/executables amongst peers within the same sub-node, and you don't have to share your sessions amongst all your nodes.
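One way to keep every component agreeing on which sub-node group owns a session, with no coordinator at all, is to derive the group deterministically from the session id. A sketch under the 4-sub-node topology from the example above; `term_hash/2` is the SWI-Prolog builtin, and other systems have equivalents.

```prolog
% Number of sub-node groups in the cluster (4, as in the example).
sub_node_count(4).

% Deterministically map a session id to a sub-node group, so the
% load balancer and the file-sharing layer compute the same answer
% independently, without ever talking to each other.
sub_node_for(SessionId, SubNode) :-
    sub_node_count(N),
    term_hash(SessionId, Hash),
    SubNode is Hash mod N.
```

Because the mapping is a pure function of the session id, a rollover to a different sub-worker-node stays inside the same group automatically.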
So imagine this in action: you respond to your user's request by starting up a binary application that is custom tailored and already contains their session information and your entire application in binary, ready-to-execute form. Can you beat that for "efficiency"? You can distribute your sessions and give your cluster complete fault tolerance and very high resilience using a dead-simple file-sharing protocol, of which there are at least a dozen good ones available free for linux.
You can scale your application to meet demand by adding a new node, hooking it up to your file distribution system, and then telling your load balancer about it. Your nodes can be super-cheap commodity linux boxes available via Amazon etc. at a low hourly rate.
This description is not pie-in-the-sky fantasy. You can use Resin (www.caucho.com) for example: it's got the whole load balancer, clustering, etc. all built in. It gives you extras like automatic compression of your responses and an excellent server-side cache (the cache can save you lots of cpu and network time, and thus pay for the price of "pro" in no time, if you use it). It's got a bunch of stuff you never think of till you use it, too, like excellent url rewriting and redirecting facilities.
You can also hack together something more homegrown on a Manjaro linux box .
You need a web server, a load balancer, a database (probably you need a database, maybe not), and a file share if you do the save-my-prolog-mind-and-data thing described above.
Keep your "application" minimal , sleek , and fast .
Make your application capable of quick startup and shutdown from the start and keep it that way .
> How do I concentrate on programming more logically? Even within prolog?
Read everything you can find by Markus Triska and ignore everything else .
> there are payoffs I imagine?
The payoff is you get to participate in the process: you get to become a logic programmer AND you get to be a (potentially very important) part of its early development. "Logic" programming is in a state of early development; hardly anyone does it, as evidenced by their ubiquitous dependency upon the cut, and their inability to avoid the (If -> Then ; Else) construct (which is non-logical).
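A concrete instance of why the cut makes a predicate non-logical, contrasted with a pure two-clause version. This is the standard textbook `max/3` example, not something from the post.

```prolog
% Imperative style: the cut commits to the first clause, so the
% second clause silently assumes Y < X without ever saying so.
% Queried with all arguments bound, it can succeed on wrong answers.
max_cut(X, Y, Y) :- Y >= X, !.
max_cut(X, _, X).

% Logical style: every clause states its full condition, so the
% predicate is true exactly when it should be.
max_pure(X, Y, Y) :- Y >= X.
max_pure(X, Y, X) :- X > Y.
```

With an unbound third argument both versions compute the maximum, but `max_cut(3, 5, 3)` wrongly succeeds (the first clause fails to unify, so the cut never fires and the unguarded second clause matches), whereas `max_pure(3, 5, 3)` correctly fails.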
It's a dubious payoff if your intention is to produce a lot of good code that accomplishes something practical. Prolog is not (currently) a good choice for that. Programming in Prolog is a painstaking rearrangement of your entire brain structure, and you are lucky to get 30 usable lines of code in a day. You get no good examples to look at and no evidence that Prolog as a practical matter can achieve anything more than what BASIC can. Let's admit it: you can get a lot of great stuff done via some very nice apis and libraries using groovy, java, C#, nodejs, dart, typescript. You can have confidence you will succeed, and there are lots of examples and evidence that such a solution can be a wise one that doesn't let you down.
In my opinion, Prolog the intelligence has determined for us the current state of the game. If you use the ! and the -> and you keep doing the same stuff, in total mimicry of what was done in 1983, you get nothing useful, your code is garbage, and you hardly make any of it.
If you pursue the "logic" then you end up in the same place , hard to be practical or efficient - but it is a VERY FUN GAME to participate in .
Looking forward to seeing some FORTH code ...
~~ kintalken ~~