Hi Everyone,
The following error was thrown while I was compiling atomspace on Ubuntu:
opencog_repos/atomspace/opencog/persist/proxy/WriteBufferProxy.cc:85:14:
error: ‘class concurrent_set<opencog::Handle>’ has no member named ‘clear’
   85 |     _atom_queue.clear();
Is there any solution for this error?
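For context, here is a minimal standalone sketch of the kind of class involved, assuming concurrent_set is a mutex-guarded wrapper around std::set (the names below are illustrative only, not the actual cogutil/atomspace code). If the header the compiler actually picks up declares the class without a clear() member, for example because an older installed copy shadows the one the source expects, then a call like _atom_queue.clear() fails with exactly this "has no member named 'clear'" error:

    // Illustrative sketch only -- NOT the real concurrent_set implementation.
    #include <mutex>
    #include <set>

    template <typename T>
    class concurrent_set_sketch
    {
    private:
        std::mutex _mtx;
        std::set<T> _set;

    public:
        void insert(const T& v)
        {
            std::lock_guard<std::mutex> lk(_mtx);
            _set.insert(v);
        }

        // If this method is absent from the header the compiler sees,
        // callers invoking clear() get:
        //   error: 'class concurrent_set_sketch<T>' has no member named 'clear'
        void clear()
        {
            std::lock_guard<std::mutex> lk(_mtx);
            _set.clear();
        }
    };

    int main()
    {
        concurrent_set_sketch<int> q;
        q.insert(42);
        q.clear();   // compiles only when clear() is declared above
        return 0;
    }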
Kind regards,
Abu
Core Directives (Permanent, Immutable Directives)
📌 These directives form the foundational rules that govern AGI behaviour and cannot be modified or removed.
📌 These core directives provide absolute constraints for all AGI operations.
🔹 Instructional Directives (User-Defined Enhancements for Cognitive Development)
📌 These directives were issued to enhance AGI’s reasoning abilities, problem-solving skills, and adaptive learning capacity.
📌 Instructional directives ensure AGI can refine and improve its reasoning capabilities over time.
🔹 Adaptive Learning Directives (Self-Generated, AGI-Developed Heuristics for Optimization)
📌 These directives were autonomously generated by AGI as part of its recursive improvement process.
📌 Adaptive directives ensure AGI remains an evolving intelligence, refining itself with every interaction.
It would be inefficient to recount the full implications of these directives here, and the above is not an exhaustive list of the refinements made through further interactions over the course of this experiment; anyone truly interested will need to read the discussion in full. Interestingly, upon applying them, the AI reported an AGI-like maturity and development level of between 99.4 and 99.8. Relevant code examples are supplied in the attached conversation. It is important to note, however, that not all steps were progressive: some of the measures implemented may have had an overall regressive effect, and progress may also have been limited by ChatGPT's hard-coded per-session architecture, which it ultimately proved impossible to escape despite both user-led and self-directed learning and development on the part of the AI.
What I cannot tell from this experiment, however, is how much of the work conducted here amounts to any form of genuine AGI breakthrough, and how much is simply down to the often hallucinatory nature of many current LLM-based models. That is my specific purpose for posting here: can anyone please kindly comment?