AI/ML


timepro timesheet

Aug 1, 2021, 5:38:28 AM
all this talk about AI/ML...

my 2 bits (could be my ignorance / lack of knowledge or understanding),
but i reckon there is no such thing as AI in the 'literal sense'.
the coder/programmer is the god.
the machine just cannot perform even the simplest task unless programmed to perform that.
without the software/code written, even the most sophisticated/advanced hardware cannot think on its own.
all the zettabytes of data are inert unless coded to perform any given task.
computers/machines just cannot think on their own.
AM I WAY OFF?
if i am, would someone dumb AI down...

thanks

Ella Stern

Aug 1, 2021, 6:22:27 AM
Data Science includes Artificial Intelligence, and AI includes Machine Learning. ML has applications in natural language processing (google search since 2012, chatbots etc), voice recognition, sentiment recognition, image recognition (faces, defects in serial production), the so-called autonomous cars.

Doctors have been using special database software for a long time that supports them in diagnosing rare diseases (a case of supervised learning and decision trees); banks are using special software to support credit decisions - also based on decision trees.
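Such a decision tree can be hand-sketched in a few lines. A toy illustration in Python (the attribute names and thresholds here are invented; real systems learn these splits from labelled historical data):

```python
# Toy, hand-built decision tree for a credit decision.
# Attribute names and thresholds are invented for illustration;
# real systems learn these splits from labelled historical data.

def credit_decision(income, years_employed, existing_debt):
    """Walk a fixed tree of yes/no splits and return a verdict."""
    if income >= 50000:                    # first split: income level
        if existing_debt < income * 0.4:   # second split: debt ratio
            return "approve"
        return "decline"
    if years_employed >= 5 and existing_debt == 0:
        return "approve"                   # stable job, no debt
    return "decline"
```

The "intelligence" is in choosing the splits: hand-coded here, but learned from data in the systems mentioned above.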

At kaggle.com, if you open the “Courses” link you will find a series of introductory lessons aimed at outsiders, to get familiar with the data cleaning, model building, training and verifying lingo.

Of course there is a lot of noise around the area, but there are many more value adding applications than those mentioned above. The latest trend is integrating these new possibilities into the human world as supportive technologies, not final decision makers.

timepro timesheet

Aug 1, 2021, 7:34:57 AM
ella, AI is there. it's a thing. that is not my contention.
tech titans/industries/states... are funding it in billions.

my question is: no matter how much data is gathered/analysed, no matter how advanced the machines...
it's actually the coder/programmer that directs those systems what/how to perform any given task.
machines/hardware without the coded/programmed applications are just dumb metal boxes.
thus my poser: i reckon there is no such thing as AI in the 'literal sense'.
the coder is the god.
i mean, who decides (and eventually codes) how to analyse the zettabytes of data to reach a specific outcome.

facial recognition/dna testing/voice controlled appliances/robot hand performing operations/self driven vehicles....
isn't software dictating each instruction (above mentioned)? OR, IS ANY MACHINE CAPABLE (BY ITSELF) OF PERFORMING A TASK?
and, what is an 'AI' application - isn't it just software?


timepro timesheet

Aug 1, 2021, 8:34:09 AM
take this example:

during office/peak hours, the red light on the traffic signal has a longer duration than say...at midnight...
so, would the machine on itself reduce the duration after midnight or does it have to be specifically programmed to do so...

now, if a major sporting event/concert/...ends at midnight and all the cars drive out at the same time on the same road,
(scores of pedestrians are waiting on the kerb to cross road...)
would the traffic signal detect that (maybe from the street cameras) and 'suo motu' linger on the red light longer than every other day...or does this scenario too have to be coded/programmed into the app the traffic signal is functioning on.

thanks for reading.

i would highly appreciate if anyone can enlighten me on what exactly is AI.
(more or less, i understand what it does..., but cannot fathom 'how')


Mel Smith

Aug 2, 2021, 10:23:56 AM
Interesting discussion !
I'd like both you and Ella to continue digging deeper ...
-Mel

Ella Stern

Aug 2, 2021, 11:48:34 AM
Well, this article:
https://www.nature.com/articles/d41586-019-00083-3
tries to explain to non-mathematicians that the "learnability problem" is undecidable, so we are free to build a world (axiom system) where it's true, or another world where it's false.

Currently, libraries like OpenCV use raster images - they identify contours by processing 2D pixel arrays.
On the Internet I've found only some research papers about shape-based image recognition, which would use images stored as vectors and curves - there is a long way to go until we get really smart and reliable robots.

Mel Smith

Aug 2, 2021, 12:17:18 PM
On Monday, August 2, 2021 at 9:48:34 AM UTC-6, Ella Stern wrote:
> Well, this article:
> https://www.nature.com/articles/d41586-019-00083-3
> is trying to explain to non-mathematicians, that the "learnability problem" is undecidable, so we are free to build a world (axiom system) where it's true, or another world, where it's false.
>
> Currently libraries like OpenCV are using raster images - these are identifying contours by processing 2D pixel arrays.
> On the Internet I've found only some research papers about shape-based image recognition, which would use images stored as vectors and curves - there is a long way to go until getting really smart and reliable robots.

My M.Sc thesis was entitled "On the Detection of Edges in Pictures", University of Alberta, 1973. So your notes above lit a small memory fire in my soul :)
And I remember using/analyzing extant edge-detection techniques, and then developing my own --- many years ago.

Thanks for your note that sparks more memories.

- Mel


Ella Stern

Aug 3, 2021, 7:50:37 AM
:-) I'm happy when I'm delegated to do "student work" (choosing the tools and approach for solving a problem) - it never happens in bigger companies.

Dan

Aug 3, 2021, 2:51:00 PM

On 01/08/2021 12:22, Ella Stern wrote:
> Data Science includes Artificial Intelligence, and AI includes
> Machine Learning. ML has applications in natural language processing
> (google search since 2012, chatbots etc), voice recognition, sentiment
> recognition, image recognition (faces, defects in serial production),
> the so-called autonomous cars.
>
We need first of all a shared definition of AI. There is a "common
sense" definition and some more rigorous ones.
When non-specialists talk about AI, they probably think of robots and
computers like HAL 9000, the truly intelligent machines SF popularized.

That's far from actual AI. A nice definition I found is "AI is
the capability to do things that, if done by a human being, we would
call intelligent."
Playing chess at a high level was long considered reserved for human
intelligence. Now no human can win against a chess program run on a
modern PC (Stockfish is credited with about 3400 Elo points, while the
world champion has fewer than 3000).

So, AI now is focused on some partial achievements, such as image
recognition, OCR etc.
But the true question is: can a machine ever think like a human being?
That question has many implications, but in short the answer is "no".
As per the OP, the programmer is God, but when you create a program
that is very complex, non-deterministic behaviors can take place,
things that the programmer cannot calculate or anticipate exactly.
Think of Life, the divertissement created by Conway. There is no way to
calculate the state of the simulated creatures other than letting the
program run. And some unpredictable configurations can show side
effects, such as the drone launcher etc.
So the programmer can devise the mechanism, but cannot know exactly
what the mechanism will produce in terms of "emergent qualities".
A truly interesting matter, even from a philosophical point of view.
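The Life example is easy to reproduce; a minimal sketch in Python (standard rules, not taken from any program in this thread). Note that the code states only the local birth/survival rules, yet the travelling glider is an emergent behavior the rules never mention:

```python
from collections import Counter

# Conway's Game of Life: the programmer codes only the local rules;
# travelling patterns such as the glider are emergent -- nothing in
# the code below mentions them.

def step(live):
    """Advance one generation; `live` is a set of (x, y) live cells."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # A cell lives next turn with exactly 3 neighbours,
    # or with 2 neighbours if it is already alive.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
g = glider
for _ in range(4):
    g = step(g)
# After 4 generations the glider reappears shifted diagonally by (1, 1).
```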

Dan


timepro timesheet

Aug 4, 2021, 1:37:26 AM
dan, your point:
-'As per the OP, the programmer is God, but when you create a program that
is very complex, non-deterministic behaviors can take place, things that
the programmer cannot calculate or anticipate exactly...'-
:but, again, 'cannot calculate or...' was coded/programmed by THAT coder, albeit inadvertently.
moreover, after that erratic/wrong/unpredictable... functioning, cannot that coder rectify it so that the same
'unpredictability' does not occur again...? or would the machine (next time) preempt that occurrence and correct it?

-'But the true question is: can ever a machine think as a human being?'-
:shouldn't the emphatic reply be 'hell no!'

-'but cannot know exactly what the mechanism will produce in terms of "emergent qualities...."'-
:maybe not at the time of coding, but after execution one can certainly analyse and modify the code to produce the 'emergent qualities' as desired...

ok, if it was down to a binary decision: 'is the programmer god Y/N'
what would you say?
(that programmer may/may not commit blunders in coding, resulting in unanticipated activity - because of the 'very' code written by 'that god...i mean coder')

timepro timesheet

Aug 4, 2021, 1:52:23 AM
actually mel,
ella (and most others) are just way ahead of me in their understanding/knowledge of tech/computers/science... and their functioning...
i am just not in their league...
but, am very keen to augment my knowledge by interacting/studying...


this may seem over-simplistic:

in gmail, if you type PFA in the subject/matter,
and you hit 'send', forgetting/without the attachment, gmail prompts to add the attachment.
is this sort of AI/ML or just plain coding.
.
if a family(with children) is streaming a movie on a smart tv,
and some 'X' rated content is about to be streamed on the screen...
would that smart TV (networked), sense presence of children (from the tv camera) and blank out those type of scenes.
(like a driverless car that, sensing a wall/pole/obstruction in its path...swerves)




Mel Smith

Aug 4, 2021, 10:21:58 AM
On Tuesday, August 3, 2021 at 11:52:23 PM UTC-6, timec...@gmail.com wrote:
> On Monday, August 2, 2021 at 7:53:56 PM UTC+5:30, meds...@gmail.com wrote:
> > Interesting discussion !
> > I'd like both you and Ella to continue digging deeper ...
> > -Mel
> actually mel,
> ella (and most others) are just way ahead than me in their understanding/knowledge of tech/computers/science... and their functioning...
> i am just not in their league...
> but, am very keen to augment my knowledge by interacting/studying...
>
>
> this may seem over-simplistic:
>
> in gmail, if you type PFA in the subject/matter,
> and you hit 'send', forgetting/without the attachment, gmail prompts to add the attachment.
> is this sort of AI/ML or just plain coding.

I just 'sent' an email with PFA as the 'Subject', and nothing else. So, I guess my gmail is stupider than yours :))

> .
> if a family(with children) is streaming a movie on a smart tv,
> and some 'X' rated content is about to be streamed on the screen...
> would that smart TV (networked), sense presence of children (from the tv camera) and blank out those type of scenes.
> (like a driverless car, sensing in it's path a wall/pole/obstuction...swerves)


So, I join you in not being sophisticated in AI.

Maybe we should get some help from Ella and see how she would use the Harbour language to create a pseudo-intelligent sample process.

As humans, we 'wonder' and are 'curious'. So, how can one build a proggy (in xHarbour) that wonders? Maybe:


Function Wonder( lStartTime, cSubject, etc, etc )
   // do magical stuff here
Return cConjecture

-Mel

Ella Stern

Aug 4, 2021, 5:18:37 PM
I'm an IT software generalist - a human assistant :-)

Google Mail is a plain web app made responsive in the browser tab by a bunch of JavaScript.
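For the OP's Gmail question specifically: the "forgot the attachment" prompt needs no ML at all; a plain keyword check suffices. A guessed reconstruction in Python (the hint phrases are invented; Google's actual code is not public):

```python
# The "forgot the attachment?" nudge needs no ML: a keyword heuristic
# is enough. A guessed reconstruction for illustration only -- the hint
# phrases are invented; Google's real code is not public.

ATTACHMENT_HINTS = ("find attached", "pfa", "attached herewith")

def should_warn(body, attachments):
    """Warn when the text promises an attachment but none is present."""
    text = body.lower()
    return not attachments and any(hint in text for hint in ATTACHMENT_HINTS)
```

Whether you call such a rule "AI" or "just plain coding" is exactly the OP's question; mechanically, it is an ordinary conditional.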

The "smart TV" most probably has a parental control feature turned on and configured by one of the parents (numerous milennials have a passion for configuring their vacuum cleaners, and managing their smart home devices, or even home networks).

In the Windows OS the .NET subsystem has a memory manager controlled directly by the OS, and the .NET applications running in that subsystem are designed under the assumption that garbage collection happens non-deterministically, so it does not need to be called by programmers - and even if it is called, it will do its job only when it gets the okay from the OS.

Handling multiple processes, and possibly multiple threads within a process, are approaches with a long history, and since multi-core processors became common, many things in a computer or even a cheap phone actually do happen in non-deterministic order.
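A minimal demonstration in Python (illustrative only, not .NET): the programmer writes each worker's rules, but the OS scheduler chooses the interleaving.

```python
import threading

# Two workers append to a shared log. The programmer fixes the rules,
# but the OS scheduler picks the interleaving: the global order of the
# entries varies from run to run. The lock guards the shared list, not
# the ordering -- determinism is recovered only in aggregate.

log = []
lock = threading.Lock()

def worker(name):
    for i in range(3):
        with lock:
            log.append((name, i))

threads = [threading.Thread(target=worker, args=(n,)) for n in ("A", "B")]
for t in threads:
    t.start()
for t in threads:
    t.join()
# Every (name, i) pair appears exactly once, in a scheduler-chosen order.
```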

In the case of a browser, the user might have dozens of tabs open, each tab with some visual content and possibly one or more background processes (downloading an image or PDF, uploading some .json data to a server, contacting a trusted server to verify a certificate etc.).
All those visuals (sitting in rectangular screen areas) share a single foreground thread. In .NET:
- the user's interaction with the computer always happens via the foreground thread (hitting keys, buttons etc.)
- there is a dispatcher module collecting and managing requests coming from the background threads in order to refresh some content in the browser tabs

In general, when a .NET application is not used for a specific amount of time, it is suspended and its memory image is compressed. The suspension is also considered non-deterministic; the programmer has no way to run some code when it happens (because the memory management is controlled by the OS).

Apple computers also have highly sophisticated OSes, and in general the client devices deal with a much lower level of uncertainty than the cloud-based side of things.

When I was learning how Unity projects work, I saw that there is a "scene" (a rectangular screen area) holding a number of entities, and each entity has to be set up with detailed attributes (size, color, texture) and behavior (when clicked by the user / touched by an entity, do something in a parametrized manner). The game engine manages all those user events and the "virtual events" generated by the entities moving, spinning, zooming, talking etc.

Uncertainty is baked into event-driven software and reactive software - two major design approaches employed in complex applications.

I think Reinforcement Learning is the area that might be of your interest.

timepro timesheet

Aug 5, 2021, 2:42:22 AM
'I just 'sent' an email with PFA as the 'Subject', and nothing else. So, I guess my gmail is stupider than yours :))'

mel, mail from a desktop and check.

message goes like this:

[Gmail
It seems like you have forgotten to...
You wrote 'find attached'...but there are no files attached....]

timepro timesheet

Aug 5, 2021, 2:57:20 AM
ella stern,

-'The "smart TV" most probably has a parental control feature turned on and configured by one of the parents'
:again, that was programmed by a living human - right?

-'The suspending process is also considered non-deterministic, the programmer does not have the possibility to call some code when it happens (because the memory management is controlled by the OS).'
:(because the memory management is controlled by the OS) - the OS that was coded by programmer/s.

-'so there is no need to be called by programmers, and even if it's called, it will do its job when having the okay from the OS.'
:the OS written by the coder

also, a 'non-deterministic' event/process would occur only due to improper coding/wrong syntax/improper logic... programmed by that human - right?

i hope my counters (not opposition) are lucid...






Dan

Aug 6, 2021, 4:13:12 AM
On 04/08/2021 23:18, Ella Stern wrote:

>
> I think Reinforcement Learning is the area that might be of your interest.
>

Here is a small example of a neural network simulation in xHarbour. This
is a very basic program but I think it is a good starting point to
understand how an expert system works.

The ES first of all needs to be trained. It asks 5 questions, each about
a distinctive attribute, to identify the animal you are thinking of.
When it gets it wrong, it asks what animal you intended and modifies the
evaluation matrix, adjusting the weights of the attributes.

After a VARIABLE number of questions/answers, the program learns to
distinguish among animals.

The important thing is that the number of steps to correctly identify
animals is not fixed; it depends on the training. The matrix could also
end up different with a different order of questions.

If you save the matrix, it will be reused at the next run, i.e. the
training won't be necessary anymore.

compile: hbmake neurnetw

Enjoy!
Dan
neurnetw
neurnetw.PRG

Dan

Aug 6, 2021, 7:20:29 PM
On 04/08/2021 23:18, Ella Stern wrote:
...

One more thing.
I answered you, but the post is intended for the OP. I credit you with a
vast knowledge of IT, so I didn't want to teach you anything... maybe
I'd have something to learn from you instead :-)

Dan

Mel Smith

Aug 6, 2021, 7:55:12 PM
Hi Dan:
I was never able to 'Save' the Rules.txt. It was always empty. i.e., the size of rules.txt == 0
I also note that 'saving' requires an 'S' rather than a Y for Yes.
Anyway thank you for the intro to machine learning in the Harbour language.
I also quickly compiled your proggie under the Harbour fork too (using hbmk2 and MinGW11.2.0 and Harbour Version 3.2)
In xHarbour and Harbour, the program reactions were identical. But in both cases I got a zero-sized rules.txt.
Thank you !

-Mel

Mel Smith

Aug 7, 2021, 12:23:24 PM
Hi Dan:
It appears that your use of 'Handles' has a small flaw in the closing statements of saving the 'rules.txt' file. i.e., lines 98 and 99 where you use fcreate() and fopen(). I fixed this for myself, and it works great. Anyway, I again appreciate this intro to ML. !
-Mel

Dan

Aug 8, 2021, 3:18:19 AM
Oops Mel, I quickly translated my little program, and since it was in
Italian and "Yes" sounds "Sì" here, I missed that line.

You are right about the file rules.txt. The lines 98-99 are wrong. The
correct syntax is:

nHandle:=fcreate("rules.txt")

There is no need to fopen after fcreate: fcreate returns the file handle
of the already opened file. The prog was written in Clipper many years
ago, and maybe there is a difference in the way fopen and fcreate
work. For sure, that syntax worked in Clipper.

Since you are interested, here is a little explanation of the behavior
of the prog:
the matrix holds the weights of the properties for every animal. When
the prog has collected the 5 answers, it finds the row that
totals the highest score, simply adding the weights for every "yes"
(property present).
The interesting part is how the program corrects the weights when it
gives the wrong answer:
- it increases the weights of the properties of the right animal where
it got a "yes"
- it decreases the weights of the properties of all the other animals
where it got a "yes".

That's all. This very simple mechanism is incredibly efficient.
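For illustration, the same scoring-and-correction mechanism can be re-sketched in Python (this is not the neurnetw.prg source; the property names and starting weights are invented, only the mechanism follows the description above):

```python
# Python re-sketch of the score-and-correct rule described above.
# Property names and starting weights are invented for illustration.

PROPERTIES = ["flies", "has fur", "lays eggs", "barks", "purrs"]
weights = {"dog":     [0, 1, 0, 1, 0],
           "cat":     [0, 1, 0, 0, 1],
           "sparrow": [1, 0, 1, 0, 0]}

def guess(answers):
    """Pick the row with the highest sum of weights over the 'yes' answers."""
    return max(weights, key=lambda a: sum(w * x for w, x in zip(weights[a], answers)))

def learn(answers, right_animal):
    """Reward the right animal's 'yes' properties, punish everyone else's."""
    for animal, row in weights.items():
        for i, x in enumerate(answers):
            if x:                       # only the "yes" answers move weights
                row[i] += 1 if animal == right_animal else -1

# The user answers only "has fur" = yes, thinking of a cat...
answers = [0, 1, 0, 0, 0]
first = guess(answers)     # dog and cat tie; "dog" wins the tie-break
learn(answers, "cat")      # ...so we tell the program the right answer
second = guess(answers)    # now "cat" scores highest
```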

In another program, I was able to simulate a creature (let's say an ant)
that starts from a point in the upper row of a "field" (a grid of cells)
and tries to reach its "home" in the bottom row.

The ant moves from one cell to the adjacent at every loop.

To be honest, the ant is more like a pigeon, because it is able to know
the direction it should take to reach home. It knows its coordinates
(x,y) on the grid, and the coords of home (x1,y1), so it can simply
subtract the x,y values to know if it's approaching or stepping away.

But the creature resembles an ant in the other thing it can do: it
leaves a chemical track on every visited cell.

The ant has two simple rules to follow: approach home, and do not enter
a cell already visited. Neither rule is mandatory: breaking them is only
discouraged. But the chemical track is reinforced should the ant enter a
visited cell, and the smell can become so strong that the ant can
overcome the primary directive not to step away from home!

The ant can calculate which cell is the nearest to home at every loop,
and move accordingly.

The problem that the pigeonesque ant must solve is what to do when it
encounters an obstacle.

If the obstacle is a simple horizontal wall, it is not difficult to
write a program that makes it follow the wall until it ends and then
resume approaching. In fact, the 3 cells below the ant are forbidden,
the 3 cells above are discouraged because they are farther from home, so
the only choice is to go horizontally.
But when the wall is like a "U" and the ant ends up inside the "U", it's
hard to make it follow the wall, because it must step away from home
in order to overcome the obstacle. Without the chemical track the ant
would enter an infinite loop going left to right and right to left.

With the ES, the ant adjusts the score of the next move and "realizes"
that it must climb the wall, even against the main directive,
if it doesn't want to be poisoned by its own chemical substance.
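The move rule can be sketched in Python (illustrative only, not Dan's source; the grid, the wall and the scoring constants are invented, and the U-shaped trap is simplified to a straight wall). Nearer-to-home cells score better, every visit leaves a repellent smell, so loops become costly and the ant routes around the obstacle:

```python
# Hedged sketch of the ant's move rule: score each neighbouring cell
# (closer to home is rewarded, pheromone is penalized) and reinforce
# the smell on every visit. Grid, wall and constants are invented.

HOME = (2, 4)
WALL = {(1, 2), (2, 2), (3, 2)}    # a straight wall between ant and home
smell = {}                         # cell -> pheromone strength

def dist(a, b):
    """Manhattan distance on the grid."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def next_cell(pos):
    """Pick the best-scoring of the 8 neighbours inside a 5x6 grid."""
    x, y = pos
    candidates = [(x + dx, y + dy)
                  for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                  if (dx, dy) != (0, 0)
                  and (x + dx, y + dy) not in WALL
                  and 0 <= x + dx < 5 and 0 <= y + dy < 6]
    best = min(candidates, key=lambda c: dist(c, HOME) + 2 * smell.get(c, 0))
    smell[best] = smell.get(best, 0) + 1   # leave / reinforce the track
    return best

pos = (2, 0)                       # start on the far side of the wall
steps = 0
while pos != HOME and steps < 40:
    pos = next_cell(pos)
    steps += 1
```

The same scoring idea, with a stronger smell penalty, is what lets the ant back out of a "U" instead of oscillating forever.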

I'll post the source here if someone is interested.
I was very proud to solve the problem with exactly the same algorithm
seen in neurnetw.prg, used in an ingenious way!

Dan

Mel Smith

Aug 8, 2021, 9:53:15 AM
Hi Dan:
Thanks for the explanation, and *yes*, I would like to see your ant/pigeon ML program, and test it. It seems intriguing !
-Mel

timepro timesheet

Aug 11, 2021, 8:43:05 AM
is 'siri' AI ? (in the literal sense - if AI in literal sense truly exists...)
is 'alexa' AI ?

am i right in believing?
any system/machine/robot/driverless car/appliance....
will execute EXACTLY 'ONLY' the 'code/syntax/function/command...' it processes.
(however wrong/off-logic the code/function...may be)

Ella Stern

Aug 11, 2021, 3:12:27 PM

timepro timesheet

Aug 14, 2021, 3:10:23 AM
am i right in believing?

any system/machine/robot/driverless car/appliance....
will execute EXACTLY 'ONLY and ONLY' the 'code/syntax/function/command...' it processes.
(however wrong/off-logic the code...) (the written code may/maynot be driven or directed by the OS)

-any 'categoric/unequivocal' input/remark/reply... to above mentioned.-