You received this message because you are subscribed to the Google Groups "The Ring Programming Language" group.
To unsubscribe from this group and stop receiving emails from it, send an email to ring-lang+...@googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/ring-lang/7ce05105-c251-4a99-8398-700b2370fde8n%40googlegroups.com.
Hello Azzeddine,
I salute your work and your strong drive for improvement and achievement. This project has clearly helped you gain a deeper understanding of core machine learning requirements, and its educational value is unquestionable.
To move toward a professional-grade ML solution that we can confidently use in real-world, customer-facing projects, I recommend considering Mahmoud’s proposal: developing an extension based on mlpack. This approach leverages a robust ecosystem of well-established libraries—Armadillo for linear algebra, ensmallen for nonlinear optimization, and cereal for data serialization—widely adopted in mature ML suites.
Such an extension would represent a significant and valuable contribution to the Ring ecosystem’s machine learning capabilities.
All the best,
Mansour
Hello Bert, Mahmoud, and Mansour,
Thank you all for this enriching discussion. I am truly grateful for your combined insights, which cover everything from low-level hardware execution to high-level ecosystem strategy.
To Bert:
Thank you for the validation regarding the memory model. You were spot on about contiguous memory arrays. My profiling confirmed that the shift from Ring Lists—which offer great flexibility but can be scattered in memory—to raw C pointers was the decisive factor. As you noted, minimizing the copying overhead and reducing CPU cache misses allowed the hardware to execute the math exactly as designed, resulting in the massive speed improvement I observed.
To Mansour:
I appreciate your kind words and your recognition of the educational value of this project. You are absolutely right; building RingML from scratch provided me with a profound understanding of the core requirements needed to sustain a machine learning workload. I agree that for mission-critical, customer-facing projects, leveraging an established ecosystem like mlpack (with its reliance on Armadillo and ensmallen) is the pragmatic path forward for the Ring community.
To Dr. Mahmoud:
Thank you for the code review and the strategic suggestions. I have corrected the redundant assignment in the Ring code as you pointed out.
Regarding the suggestion to wrap mlpack: I fully agree with you and Mansour. Creating an extension for mlpack would be the most effective way to bring a production-ready, feature-rich ML suite to Ring, combining the best of both worlds.
However, the specific goal of RingML was to demonstrate that Ring can serve as a powerful "glue language" for a custom-built, high-performance engine, much like how Python interacts with PyTorch. I felt it was necessary to build the engine architecture from the ground up to push the limits of Ring's C-API.
While FastPro is excellent for general list manipulation, I found that Deep Learning workloads have specific constraints that required the specialized RingTensor approach:
Memory Layout: Deep Learning relies heavily on accessing memory in a linear, contiguous block to maximize CPU performance. I moved to raw C pointers to ensure the CPU could prefetch data efficiently, avoiding the overhead of traversing linked structures during heavy matrix operations.
Kernel Fusion: Optimization algorithms involve multiple sequential steps—updating momentum, velocity, and weights based on gradients. Doing this via separate function calls from the interpreter creates significant overhead per step. In my custom extension, I implemented "Fused Kernels," where the entire optimizer logic executes in a single C function call, bypassing the interpreter entirely for the heavy lifting.
RingML now stands as a proof-of-concept that Ring is capable of driving high-performance numerical computing. I look forward to exploring the mlpack integration as the next logical step for the ecosystem.
Best regards,
Azzeddine
I’m glad to see how strongly you believe in your direction of building a Ring-based ML stack from scratch. I’m confident that, in the end, you’ll deliver a solution that is truly well-suited to the specific needs of Ring applications, even in a professional context.
I encourage you to keep moving forward, to study and learn from existing libraries such as mlpack and others, and to incorporate their strengths in a way that preserves your own vision of RingML—clean, elegant, and practical.
I would be very happy if someone develops an mlpack extension for Ring, but I would be just as happy to see you continue your journey with RingML itself.