Hello,
Corewar/Redcode94/PMars94/ICWS94 Tiny Validators are now available at the following GitHub link:
https://github.com/SkybuckFlying/Redcode94TinyValidators
This is a collection of 400,000+ tiny files (1 KB input, 2 KB coredump).
These files allow Corewar simulators/executors to be validated/verified for correct functioning and conformance to the ICWS94 draft specification and the PMars94 execution environment.
Here is more information about these files and how to use them:
# Redcode94/PMars94/Corewar/ICWS94 Tiny Validators
## How to use these tiny validators to validate your core (ICWS94 draft) executor:
1. Clone the git repository (the tiny validators total about 2 GB; git may require another 2 GB, so a 4 GB virtual disk on an 8 GB RAM disk is recommended for roughly a 16x speed-up).
2. Start your core executor.
3. Set core size to 16.
4. Set private space size to 16.
5. Load instruction program (.red) into the core at position 0.
6. Load private space sequential numbers (.pspace) into private space at
position 0.
7. Set maximum cycles to 3.
8. Execute the core starting at position 0.
9. Load core dump into memory (.coredump.red).
10. Compare executor core to coredump.
11. Compare executor private space to coredump.
12. Compare executor threads to coredump.
13. Compare executor thread positions to coredump.
14. Verify they are the same; if not, your executor contains a bug (see the comparison sketch below).
15. Repeat for the next instruction.
.red = input to PMarsW95 v0.9.2-5.23 by Skybuck Flying
.coredump.red = output from PMarsW95 v0.9.2-5.23 by Skybuck Flying
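To make comparison steps 10-14 concrete, here is a minimal Python sketch. It assumes a hypothetical executor object exposing core, pspace, task_count and task_positions attributes (these names are mine, not part of PMarsW95 or any real API), and a coredump parsed into a dictionary, for example by the parser sketched further below:
```python
# Minimal sketch of steps 10-14; 'executor' is a hypothetical object, not PMarsW95.
def compare_to_coredump(executor, dump):
    """Return a list of mismatch descriptions; an empty list means the test passes."""
    mismatches = []
    if executor.core != dump["core"]:
        mismatches.append("core contents differ")
    if executor.pspace != dump["pspace"]:
        mismatches.append("private space differs")
    if executor.task_count != dump["tasks"]:
        mismatches.append("thread count differs")
    if executor.task_positions != dump["task_heads"]:
        mismatches.append("thread positions differ")
    return mismatches
```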
## Command prompt example for PMarsW95 (loading an instruction test program, dumping core, and loading private space; core size 16, pspace size 16, cycles 3, max length 10, distance 10):
pmarsw95 ".\TinyValidators\000001 DAT\000001 DAT.A Number -2, Number -2.red" -s 16 -S 16 -X ".\TinyValidators\000001 DAT\000001 DAT.A Number -2, Number -2.coredump.red" -l 10 -d 10 -c 3 -T -Z -Y .\TinyValidators\sequential.pspace
## Git remarks:
1. Git is slow for many files (400,000+).
2. Git consumes additional hard disk space for adding/committing.
3. Git wastes time refreshing the index after copying files/the git repo (refreshing can be disabled, but I have no experience with that).
## Github remarks:
1. GitHub's 100 MB file size limit prevents uploading the 2 GB virtual hard disk necessary to contain these files, so they are uploaded separately (loose).
## Performance advice (use a RAM Disk):
1. Copy the virtual hard disk to a RAM disk.
2. Mount the virtual hard disk from the RAM disk.
3. Access the RAM disk backed virtual hard disk for increased performance.
(Alternatively, git clone can also be done directly to the RAM disk.)
## RAM Disk Caution (Data Loss):
1. If making any changes, make sure to copy the virtual hard disk back to a real hard disk before shutting down the RAM disk, otherwise all changes are lost.
## About PMarsW95 (modifications for validating):
PMarsW95 is a modification of PMars94 to allow loading of private space
values and dumping of core. It has additional command line parameters.
PMarsW95 is not included in this repository.
## About folders:
Each redcode instruction has its own folder containing all its variations of modifier, addressing mode, and negative/positive test values.
Each variation is numbered at the start of the .red filename.
Each folder name starts with a number indicating the variation number at which the instruction starts; this may be handy for referring to inner loops of verification programs and for debugging.
In total there are 212,800 instruction variations.
There is also a special test program to test a "perfect spawn".
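As a rough orientation aid, here is a small Python sketch that counts the .red test programs per instruction folder, assuming the layout shown in the example below (one folder per instruction, coredumps stored next to the inputs); the root path is whatever folder you cloned or extracted the validators into:
```python
import os
from collections import Counter

def count_variations(root):
    """Count .red test programs per instruction folder (coredump files excluded)."""
    counts = Counter()
    for folder in sorted(os.listdir(root)):
        path = os.path.join(root, folder)
        if not os.path.isdir(path):
            continue
        reds = [f for f in os.listdir(path)
                if f.endswith(".red") and not f.endswith(".coredump.red")]
        counts[folder] = len(reds)
    return counts  # summing the values should come to roughly 212,800
```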
## Here is an example of the files:
```
Input textfile name:
.\011201 MOV\011201 MOV.A Number -2, Number -2.red
Input textfile contents:
nop.f $-4, $-3
nop.f $-2, $-1
MOV.A #-2, #-2
nop.f $ 1, $ 2
nop.f $ 3, $ 4
Output textfile filename:
.\011201 MOV\011201 MOV.A Number -2, Number -2.coredump.red
Output textfile contents:
;warrior[0].tasks: 1
;warrior[0].taskHead[0]: 3
;warrior[0].pspace[0]: 1
;warrior[0].pspace[1]: -7
;warrior[0].pspace[2]: -6
;warrior[0].pspace[3]: -5
;warrior[0].pspace[4]: -4
;warrior[0].pspace[5]: -3
;warrior[0].pspace[6]: -2
;warrior[0].pspace[7]: -1
;warrior[0].pspace[8]: 0
;warrior[0].pspace[9]: 1
;warrior[0].pspace[10]: 2
;warrior[0].pspace[11]: 3
;warrior[0].pspace[12]: 4
;warrior[0].pspace[13]: 5
;warrior[0].pspace[14]: 6
;warrior[0].pspace[15]: 7
00000: NOP.F $ -4, $ -3
00001: NOP.F $ -2, $ -1
00002: MOV.A # -2, # -2
00003: NOP.F $ 1, $ 2
00004: NOP.F $ 3, $ 4
00005: DAT.F $ 0, $ 0
00006: DAT.F $ 0, $ 0
00007: DAT.F $ 0, $ 0
00008: DAT.F $ 0, $ 0
00009: DAT.F $ 0, $ 0
00010: DAT.F $ 0, $ 0
00011: DAT.F $ 0, $ 0
00012: DAT.F $ 0, $ 0
00013: DAT.F $ 0, $ 0
00014: DAT.F $ 0, $ 0
00015: DAT.F $ 0, $ 0
pspace initialization textfile filename:
.\sequential.pspace
pspace initialization textfile contents:
;warrior[0].pspace[0]: -8
;warrior[0].pspace[1]: -7
;warrior[0].pspace[2]: -6
;warrior[0].pspace[3]: -5
;warrior[0].pspace[4]: -4
;warrior[0].pspace[5]: -3
;warrior[0].pspace[6]: -2
;warrior[0].pspace[7]: -1
;warrior[0].pspace[8]: 0
;warrior[0].pspace[9]: 1
;warrior[0].pspace[10]: 2
;warrior[0].pspace[11]: 3
;warrior[0].pspace[12]: 4
;warrior[0].pspace[13]: 5
;warrior[0].pspace[14]: 6
;warrior[0].pspace[15]: 7
```
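Based on the dump format shown above, here is a rough Python sketch of a parser for these .coredump.red files (the function name and dictionary keys are my own, not an official format specification); the same header syntax also appears in the .pspace initialization file:
```python
import re

def parse_coredump(path):
    """Parse a PMarsW95-style core dump into tasks, task heads, pspace and core cells."""
    dump = {"tasks": None, "task_heads": [], "pspace": {}, "core": {}}
    # e.g. ";warrior[0].pspace[3]: -5" or ";warrior[0].tasks: 1"
    header = re.compile(r";warrior\[(\d+)\]\.(\w+)(?:\[(\d+)\])?:\s*(-?\d+)")
    # e.g. "00002: MOV.A  #    -2, #    -2"
    cell = re.compile(r"(\d+):\s+(\S+)\s+([#$@*<>{}])\s*(-?\d+),\s+([#$@*<>{}])\s*(-?\d+)")
    with open(path) as f:
        for line in f:
            m = header.match(line)
            if m:
                _warrior, field, index, value = m.groups()
                if field == "tasks":
                    dump["tasks"] = int(value)
                elif field == "taskHead":
                    dump["task_heads"].append(int(value))
                elif field == "pspace":
                    dump["pspace"][int(index)] = int(value)
                continue
            m = cell.match(line)
            if m:
                addr, opcode, amode, aval, bmode, bval = m.groups()
                dump["core"][int(addr)] = (opcode, amode, int(aval), bmode, int(bval))
    return dump
```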
## Example of how to run the modified PMarsW95 executable to load test programs/pspace, set core size, pspace size and cycles, and produce core dumps:
MS-DOS prompt/batch file command:
```
pmarsw95 ".\011201 MOV\011201 MOV.A Number -2, Number -2.red" -s 16 -S 16 -X ".\011201 MOV\011201 MOV.A Number -2, Number -2.coredump.red" -l 10 -d 10 -c 3 -T -Z -Y .\sequential.pspace
```
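To run the whole collection rather than a single file, a small driver can loop over the folders. Below is a rough Python sketch under the assumptions that pmarsw95 (not included in this repository) is on the PATH, that the folder layout matches the examples above, and that the flags mirror the command line shown; the .mydump.red output name is my own choice so the shipped coredumps stay untouched:
```python
import os
import subprocess

def run_all(root, pspace_file):
    """Run PMarsW95 on every .red test program, mirroring the command line above."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith(".red") or name.endswith(".coredump.red"):
                continue  # skip coredumps and non-redcode files
            red = os.path.join(dirpath, name)
            dump = red[:-len(".red")] + ".mydump.red"
            subprocess.run(["pmarsw95", red, "-s", "16", "-S", "16",
                            "-X", dump, "-l", "10", "-d", "10", "-c", "3",
                            "-T", "-Z", "-Y", pspace_file], check=True)
```
For example, run_all(r".\TinyValidators", r".\TinyValidators\sequential.pspace") would mirror the first command prompt example near the top of this post; the same loop structure can be adapted to drive your own executor instead and then diff its dumps against the shipped .coredump.red files.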
## Command line parameters for PMarsW95:
```
PMARSW95 v0.9.2-5.23 created on 5 april 2010 by Skybuck Flying
Corewar simulator with ICWS'94 and Extension 2009
New Features Added: Instruction limit 8000 (-l), Core Dump (-X filename),
Thread Dump (-T), PSpace Dump (-Z), PSpace Load (-Y filename),
Performance (-M), Warrior TextDump (-W), Warrior BinaryDump (-A)
Copyright (C) 1993-95 Albert Ma, Na'ndor Sieben, Stefan Strack and
Mintardjo Wangsaw, Copyright (C) 2009-10 Skybuck Flying (Harald Houppermans :))
Usage:
pmars [options] file1 [files ..]
The special file - stands for standard input
Options:
-r # Rounds to play [1]
-e Enter debugger
-s # Size of core [8000]
-b Brief mode (no source listings)
-c # Cycles until tie [80000]
-V Verbose assembly
-p # Max. processes [8000]
-k Output in KotH format
-l # Max. warrior length [100]
-8 Enforce ICWS'88 rules
-d # Min. warriors distance
-f Fixed position series
-F # Fixed position of warrior #2
-o Sort result output by score
-X $ Dumps core to file $
-T Dumps threads to coredump
-Z Dumps pspace to coredump
-Y $ Loads pspace from file $
-M Performance measurement
-W $ Dumps warrior(s) to text file $
-A $ Dumps warrior(s) to binary file $
-S # Size of P-space [1/16th core]
-= $ Score formula $ [(W*W-1)/S]
-@ $ Read options from file $
```
## Skybuck's Corewar Simulator:
Skybuck's Corewar Simulator can be found at the link below, which contains more information and links to PMarsW95, the program used to validate this Corewar executor/simulator.
The instruction set was later modified to allow more powerful self-modification of warriors; it remains compliant with the standard as long as those extensions are not used and do not appear in the warriors.
https://www.skybuck.org/Corewars/SkybucksCorewarsSimulator/version%200.18/
Newer versions can be found here:
https://www.skybuck.org/Corewars/SkybucksCorewarsSimulator/
Bye for now,
Skybuck.
P.S.: A person with the initials K.R. contacted me about the tiny validators; he had found a posting of mine from 2009. He was disappointed that he could not find any files to help him validate an executor he might write.
I checked my files and still have the generators and such, so I re-generated these files, re-dumped the cores and stored them on disk. That all went relatively fast until I tried to "git" it.
Git was painfully slow: it took 4 hours to do "git add ." on a hard disk.
Today I tried again on a RAM disk to see if it would make a difference; it still took 15 minutes on a RAM disk/DDR5.
Generating/core dumping only took a few minutes.
It remains a mystery why "git add ." is so slow. I tried to debug git in Microsoft Visual Studio 2022 but as usual ran into some issue; this time it was sh.exe missing from the path. I added it, but then another weird issue appeared.
Storing these files in a virtual hard disk on a RAM disk was also tried; while it worked, it caused further problems for git and GitHub, such as:
1. Running out of space on the virtual disk because git requires more space.
2. GitHub does not allow files larger than 100 MB.
So ultimately all files/folders were committed and uploaded as-is, loose, which is kinda nice: now people can dive into each instruction on a need-to-know basis if so desired, which is kinda what I wanted.
To get these files onto your hard disk:
1. Install git.
2. Have a working internet connection.
3. Start git bash command line tool.
4. cd to a nice folder of your choosing and then type the magical command:
git clone https://github.com/SkybuckFlying/Redcode94TinyValidators.git
Example of what you will see:
$ git clone https://github.com/SkybuckFlying/Redcode94TinyValidators.git
Cloning into 'Redcode94TinyValidators'...
remote: Enumerating objects: 411049, done.
remote: Counting objects: 100% (411049/411049), done.
remote: Compressing objects: 100% (213170/213170), done.
remote: Total 411049 (delta 197880), reused 411045 (delta 197879), pack-reused 0
Receiving objects: 100% (411049/411049), 29.35 MiB | 19.55 MiB/s, done.
Resolving deltas: 100% (197880/197880), done.
Updating files: 40% (172872/425604)
Trying it now! ;) It seems to be working OK; it may take a few minutes depending on your hardware.
For the record:
Atto disk benchmark: 138 Megabyte/sec for RAMDisk for 1 KB files.
Atto disk benchmark: 274 Megabyte/sec for RAMDisk for 2 KB files.
Atto disk benchmark: 530 Megabyte/sec for RAMDisk for 4 KB files.
Roughly 140K I/O/sec for all 3.
Atto disk benchmark: 106 Megabyte/sec for SSD for 1 KB files.
Atto disk benchmark: 214 Megabyte/sec for SSD for 2 KB files.
Atto disk benchmark: 420 Megabyte/sec for SSD for 4 KB files.
Roughly 110K I/O/sec for all 3.
(I "assume" cluster size is 4 KB on my NTFS file systems, also known as "allocation unit size" which is set to 4 KB.)
I did not use the SSD though; it will remain for more important stuff to prevent wear and tear. SSD speed was tested just out of curiosity.
So the conclusion is that RAM disk performance is slightly better than SSD performance.
Further conclusion: no need to destroy the SSD if a RAM disk can be used.
Problems with a RAM disk might be:
Size too small, limited to RAM size.
Data loss risk, but the same can be said of SSDs long term! ;)
Plus of course you must write back to disk after usage/changes.
For now normal hard disk usage is recommended; they can come with RAM chips/caching too! ;)
Git should be looked at by its developers though; it's unusually slow on a hard disk, possibly caused by excessive read/write/read/write/read/write head movements. Better would be read/read/read/read, write/write/write/write.
Further risks of RAM disk usage might be "rowhammer effects", in other words bit flips. Not sure how common these are during processing.
I am not sure if I knew about GitHub in 2009, or if it was even available back then... my memory says not... but the wiki says it existed...
Well whatever it may be... welcome to the future ! ;) =D
In case you haven't seen it yet, this is my super PC in 2023, and it was used to generate/upload/write all of this. Microsoft Windows 11 is buggy with many functionalities/features missing, and Edge is kinda buggy too sometimes; the monitor is buggy/dimming too, and the monitor power button is buggy. Though I am providing lots of feedback to Microsoft on how to improve Windows 11, a first in 30 years, so maybe it will help. Perhaps the start menu will be made bigger on 4K screens! ;)
Also, that reminds me, I should probably bitch that it's hard to tell an uppercase I from a lowercase l with Edge fonts... hmmm... such noobs...
https://www.skybuck.org/Hardware/SuperPC2023/Skybuck's%20SuperPC%20for%202023%20design%20version%2011%20final.txt
Bye for now,
Skybuck.