On Friday, March 17, 2023 at 1:09:51 AM UTC-4, Christopher Lozinski wrote:
> On Thursday, March 16, 2023 at 11:55:26 PM UTC+1, Lorem Ipsum wrote:
> > > The EP16/24/32 seem to have two issues.
> > > The stack uses a very large shift register instead of a memory.
> > > It uses latches for registers, which my class advised against.
> > I don't find any evidence that either of these is true. I can't find anything indicating the stacks are shift registers. Looking at ep16.vhd, here is the code for the entire CPU register set.
> Thank you for correcting me. I got it not from the code, but from the documentation.
>
> > "In this design, the CPU latches all data into appropriate registers and stacks on the rising edge of a single phase master clock. Such a synchronous design ensures that all instructions are executed quickly and reliably in a single clock cycle. When the master clock is held steady, the microprocessor retains all data in registers, stacks and memory, consuming very little power. It is thus possible to further reduce its power consumption by reducing the clock rate, or stopping the clock completely.
> ."
>
> I can't find the line where it says that the stacks are implemented as shift registers.
"The T register connects parameter stack and return stack as a giant shift register.
Data can be shifted towards the return stack by a PUSH instruction, and shifted
towards the parameter stack by a POP instruction."
I suspect this is what you read. He is simply describing how the >R and R> instructions (which he calls PUSH and POP) move data back and forth. When you move a value, the rest of the stack shifts in the same direction as well.
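For illustration, a stack built that way might look like this in VHDL. This is my own sketch, not anything from ep16.vhd; the names and sizes are made up. Notice that every cell moves on every push or pop:

library ieee;
use ieee.std_logic_1164.all;

entity shift_stack is
  generic (WIDTH : natural := 16;
           DEPTH : natural := 8);
  port (clk  : in  std_logic;
        push : in  std_logic;
        pop  : in  std_logic;
        din  : in  std_logic_vector(WIDTH-1 downto 0);
        tos  : out std_logic_vector(WIDTH-1 downto 0));
end entity;

architecture rtl of shift_stack is
  type stack_t is array (0 to DEPTH-1)
    of std_logic_vector(WIDTH-1 downto 0);
  signal cells : stack_t := (others => (others => '0'));
begin
  process (clk)
  begin
    if rising_edge(clk) then
      if push = '1' then
        cells(0) <= din;                 -- new top of stack
        for i in 1 to DEPTH-1 loop
          cells(i) <= cells(i-1);        -- every cell shifts deeper
        end loop;
      elsif pop = '1' then
        for i in 0 to DEPTH-2 loop
          cells(i) <= cells(i+1);        -- every cell shifts shallower
        end loop;
      end if;
    end if;
  end process;
  tos <= cells(0);
end architecture;

Functionally it behaves just like a RAM with a stack pointer; only the implementation differs.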
But even if he was using a shift register, why would that matter?
> "Read the code" is good advice.
> > You can try Mecrisp-Ice (custom stack processor)
I think you munged the attributions. This was from Matthias, not me.
> I did take a look at Mecrisp-Ice. It runs on the J1a CPU.
> In the past I looked at the J1 CPU, and I looked again. It now has a 32-bit version, Python simulators, Verilator compilation to C++, and even a VHDL version. But it has two problems, one specific and one abstract. If I recall correctly, the specific problem is that it has no interrupts. The abstract problem is that it was designed to squeeze into a tiny space for a commercial application.
A CPU in an FPGA often does not need interrupts. My CPU design has an interrupt. I optimized the design so that speeds are easy to calculate: every instruction takes one clock cycle, including the interrupt, which is just a forced call instruction that also pushes the processor status word onto the data stack. That allows very fast interrupt servicing for hard real-time applications.
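Roughly like this, as a simplified sketch. This is not my production code; the widths, stack depth, and vector address are made up for illustration, and the PSW update logic is omitted:

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity int_sketch is
  port (clk    : in  std_logic;
        irq    : in  std_logic;
        pc_out : out unsigned(15 downto 0));
end entity;

architecture rtl of int_sketch is
  type stack_t is array (0 to 15) of unsigned(15 downto 0);
  signal dstack, rstack : stack_t := (others => (others => '0'));
  signal dsp, rsp : integer range 0 to 15 := 0;
  signal pc, psw  : unsigned(15 downto 0) := (others => '0');
  constant INT_VECTOR : unsigned(15 downto 0) := x"0004";
begin
  process (clk)
  begin
    if rising_edge(clk) then
      if irq = '1' then
        rstack(rsp) <= pc;           -- forced call: save return address
        rsp <= (rsp + 1) mod 16;
        dstack(dsp) <= psw;          -- and push the status word
        dsp <= (dsp + 1) mod 16;
        pc  <= INT_VECTOR;           -- jump to the handler, same cycle
      else
        pc <= pc + 1;                -- normal fetch (decode omitted)
      end if;
    end if;
  end process;
  pc_out <= pc;
end architecture;

The point is that the interrupt is just another input to the instruction mux, so it costs the same single cycle as any other call.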
> I am less interested in building an end-user application, and more interested in building a wonderful development environment: a grid of Forth CPUs talking to each other, maybe even interrupting each other. So I think interrupts are critical.
Chuck Moore would not agree with you. The GA144 has no interrupts. It has processors that are dedicated to tasks, so they stop and wait for data or even just synchronization.
Not sure what that means, other than I guess you are not happy with the fact that Verilog has many assumptions that you can override... if you know they are there. I've never learned Verilog, because I've never found a book that explains all the gotchas.
> I am not a full time EE, so an explicit language makes much more sense to me.
> I think I much prefer VHDL. Of course the grass is always greener on the other side of the fence.
VHDL is *very* wordy. It has relaxed a bit in the last decade or two, with many new features that make life easier. I noticed the EP16 code uses (clk'event and clk='1') rather than rising_edge(clk). Very old school. The old form is deprecated because it can trigger on odd events, such as a transition from 'H' (weak high) to '1', but it works normally.

There are still gotchas. VHDL uses delta delays (think infinitesimal time intervals) to order events that happen while simulation time has not advanced. Without them, several FFs could toggle on the same clock edge, with some being updated and then feeding into others that are evaluated next. If you run a clock through a buffer, the assignment adds a delta delay, assuring that the downstream FFs receive data updated on the previous delta cycle. Oops! So watch out for any assignments (buffers) in clock paths.
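Here is a tiny example of that trap; the signal names are mine:

library ieee;
use ieee.std_logic_1164.all;

entity delta_demo is
  port (clk : in  std_logic;
        d   : in  std_logic;
        q   : out std_logic);
end entity;

architecture sim of delta_demo is
  signal clk_buf : std_logic;
  signal a       : std_logic := '0';
begin
  clk_buf <= clk;            -- the "buffer": adds one delta delay

  ff1 : process (clk)
  begin
    if rising_edge(clk) then
      a <= d;                -- updates at the real clock edge
    end if;
  end process;

  ff2 : process (clk_buf)
  begin
    if rising_edge(clk_buf) then
      q <= a;                -- fires one delta later, so it samples
    end if;                  -- the NEW a: the pipeline stage is lost
  end process;
end architecture;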
> Thank you everyone for all of the great advice. I feel like you care about these issues. Out of caution, I do not even talk about it at school.
LOL I was in a presentation from a vendor for a new ARM MCU (back when not everyone was selling ARM MCUs). They asked about our applications and I mentioned Forth. The presenter actually laughed and said he didn't think anyone still used it! I just find it more usable than dealing with the tools of C or other HLLs.
I use VHDL, but I guess I'm used to that. Writing code for hardware design is not like writing software... exactly. I think in terms of what I want built, rather than only what it does. VHDL has code that is executed sequentially, but mostly it's concurrent... in parallel. That's the part people have trouble getting used to when they switch from software.
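For example, in this trivial sketch of mine the two assignments are concurrent; writing them in either order describes the same hardware:

library ieee;
use ieee.std_logic_1164.all;

entity conc_demo is
  port (a, b, c : in  std_logic;
        z       : out std_logic);
end entity;

architecture rtl of conc_demo is
  signal y : std_logic;
begin
  -- Concurrent statements: file order is irrelevant.  Each one
  -- re-evaluates whenever one of its inputs changes, like gates
  -- wired together, not like statements executed top to bottom.
  z <= y or c;
  y <= a and b;
end architecture;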
There are many who play/work in Forth because they like it. I guess that's why you are using it for your project. You just like it.
--
Rick C.
+ Get 1,000 miles of free Supercharging
+ Tesla referral code -
https://ts.la/richard11209