I'm an embedded software engineer by trade, so I'm quite comfortable
writing PIC/AVR/8051/etc code, and have been considering some of the
newer (and cheap!) USB microcontrollers out there. So I've started
working on a project to do the USB<-->CBM cable. As part of my
background search, there is reference to the XU1541 cable, but all the
links I have found are broken. Does this cable still exist? Are
there archives of the work that was done?
I saw some discussion of integrating the XU1541 into OpenCBM. I've
downloaded OpenCBM, and it appears to be a toolset plus a low-level
driver. I haven't explored it much, but it appears to bit-bang the
parallel port via the X1541 (et al) cable.
Not to stir up the USB vs parallel port debate, I wanted to announce
my own work on a USB cable. I have created a website and blog (my
first ever, so don't expect much) documenting my ideas and the
project. I'm open to ideas, and help. Check out my blog at
http://mudplace.org/?cat=u1541 for more information.
I think the biggest barrier to the USB version of the cable is cost.
X1541 cables are very inexpensive and easy to make. A USB solution
requires several more dollars (probably close to $30 in parts in small
quantity purchases). However, I think a USB solution could be much
cleaner in terms of ease of use, offloading the bit-banging to a
microcontroller, and compatibility with newer machines.
Anyways, if anybody is interested, I'd be glad to hear from them.
Pete
> I've been thinking about a different interface cable to the 1541 (and
> other serial devices) for some time. Using the X1541 et al has been
> troublesome for me.
[..]
> Not to stir up the USB vs parallel port debate, I wanted to announce
> my own work on a USB cable.
[..]
> A USB solution requires several more dollars (probably close to $30
> in parts in small quantity purchases).
... if you really turn this into a real product then I'd buy one. 30
bucks is not that much.
I don't want the parallel port anymore. No, I don't want to stir up the
discussion about pros and cons either, but the fact is, fewer and fewer
machines have one. I for one have only one PC (an old laptop) left that has
a parallel port. That's why I don't want it. It works fine, but for how
long? I changed its hard disk once and put a 2 GB disk into it. But
what about today's sizes? Can it handle at least a fraction of them,
i.e. can I plug in a modern disk at all? You see my worry. This old
laptop will inevitably die one day and then I won't have a computer
with a parallel port anymore... Hence, I'm looking for alternatives. And USB
would definitely be a great one.
--
cul8er
> Anyways, if anybody is interested, I'd be glad to hear from them.
>
Consider me interested :D
-------- Original Message --------
Subject: [Opencbm-devel] xu1541 cancelled
Date: Thu, 27 Dec 2007 12:53:43 +0100
From: Till Harbaum / Lists
Reply-To: list for opencbm development
To: list for opencbm development
References: <20070128114...@trikaliotis.net>
Hi all,
i hope you all had a pleasant christmas!
Since the xu1541 seems to have become a big undocumented mess and since
there's no visible progress that allows users to actually use the xu1541
i have decided to cancel the project. I have also not managed to get
into a discussion with payton about the units i sent him so i consider
these units to be lost.
I have removed the xu1541 web page and have asked Christian to also
remove the entries from his AVR USB page.
Since everything is open source you are of course free to continue your
work. And i would appreciate an email if you actually release something
based upon the xu1541.
I'll now unsubscribe from this list, so if you have some urgent replies
please CC me directly.
Regards,
Till
---------------------------------
However, instead of re-inventing the wheel, join the OpenCBM-devel list
(https://lists.trikaliotis.net/listinfo/opencbm-devel) and see if anyone
has the firmware. It was an AVR-based solution, with soft USB. If you
want to use a newer AVRUSB device, I am sure you can ditch the soft USB.
A nice bonus is that XU1541 support is built into the OpenCBM
routines, so no integration costs.
> Not to stir up the USB vs parallel port debate, I wanted to announce
> my own work on a USB cable. I have created a website and blog (my
> first ever, so don't expect much) documenting my ideas and the
> project. I'm open to ideas, and help. Check out my blog at
> http://mudplace.org/?cat=u1541 for more information.
>
> I think the biggest barrier to the USB version of the cable is cost.
> X1541 cables are very inexpensive and easy to make. A USB solution
> requires several more dollars (probably close to $30 in parts in small
> quantity purchases). However, I think a USB solution could be much
> cleaner in terms of ease of use, offloading the bit-banging to a
> microcontroller, and compatibility with newer machines.
There were plenty of people interested, so I think the cost is not an issue.
Jim
pet...@mudplace.org wrote:
> I'm an embedded software engineer by trade, so I'm quite comfortable
> writing PIC/AVR/8051/etc code, and have been considering some of the
> newer (and cheap!) USB microcontrollers out there. So I've started
> working on a project to do the USB<-->CBM cable. As part of my
> background search, there is reference to the XU1541 cable, but all the
> links I have found are broken. Does this cable still exist? Are
> there archives of the work that was done?
See here:
http://www.trikaliotis.net/xu1541
Note that the links are not (yet) working. However, the sources and the
schematics (Eagle format) are in the OpenCBM CVS.
> I saw some discussion of integrating the XU1541 into OpenCBM. I've
> downloaded OpenCBM, and it appears to be a toolset plus a low-level
> driver. I haven't explored it much, but it appears to bit-bang the
> parallel port via the X1541 (et al) cable.
Right.
Note that the integration into OpenCBM is done. In fact, the only part
still missing is the installation, which needs some manual steps at the
moment.
I am currently working on this. As soon as I am not ill anymore, I hope
to be able to finish it. Thus, some testers would be good (then).
> Not to stir up the USB vs parallel port debate, I wanted to announce
> my own work on a USB cable. I have created a website and blog (my
> first ever, so don't expect much) documenting my ideas and the
> project. I'm open to ideas, and help. Check out my blog at
> http://mudplace.org/?cat=u1541 for more information.
>
> I think the biggest barrier to the USB version of the cable is cost.
> X1541 cables are very inexpensive and easy to make. A USB solution
> requires several more dollars (probably close to $30 in parts in small
> quantity purchases).
IIRC, the XU1541 cost less than EUR 15,- - that is, ignoring the
soldering, only the parts.
The important part for the XU1541 which makes it so cheap is that the
USB protocol is performed completely in software.
> Anyways, if anybody is interested, I'd be glad to hear from them.
If you want a solution and to integrate it into OpenCBM, please
1. Use the current CVS version of OpenCBM, and
2. contact me so we can work together on this.
Regards,
Spiro.
--
Spiro R. Trikaliotis http://opencbm.sf.net/
http://www.trikaliotis.net/ http://www.viceteam.org/
I've been playing with that idea for some time. I have looked at AVR-based
solutions, and I'm convinced it is doable. Unfortunately I already have
too much stuff ordered that I never got around to putting to good use (due
to a structural lack of time).
> I think the biggest barrier to the USB version of the cable is cost.
Maybe it isn't that bad; today I saw a kit for €8,- which could, with
very few additional parts, be turned into a 1541-to-USB converter:
http://www.samenkopen.net/action_product/878690/767990
With AVR it is pretty easy to have cycle exact timing, and they are
pretty fast too (compared to a 6502). I believe it is possible to speed
up the transfer between the AVR and the 1541 with a dedicated transfer
protocol.
Good luck with your project.
you wrote:
> pet...@mudplace.org wrote:
>
>> I think the biggest barrier to the USB version of the cable is cost.
>
> Maybe it isn't that bad; today I saw a kit for €8,- which could, with
> very few additional parts, turned into a 1541-to-USB converter:
> http://www.samenkopen.net/action_product/878690/767990
In his blog the original poster also mentioned
the fairly new USB controllers from Atmel, the
AT90USB series.
In my opinion this is a very good choice too.
Instead of letting the microcontroller do both
protocols purely in software (bit-banging the USB
bus in software as well as the target application,
the Commodore IEC bus), there should be enough
performance left for the IEC bus as well as additional
features, like fast parallel cable support, maybe
fast enough to support the MNib
parallel burst protocol (without an extra 8K RAM
buffer); well, I'm unable to estimate how tight
the implementation would become in the end.
Womo
Perhaps the AT90 or the UC3B can alleviate these problems. A key
sentence from your website about the XU1541 is "A CBM IEC floppy
emulation requires an interface to respond faster to incoming requests
from the C64 than the xu1541 currently can do with the software USB
implementation." Offloading the USB work to a hardware interface can
speed this up. However, I don't know all the details about the serial
bus and timings, and the latency of the USB connection could be too
large regardless of the speed of the microcontroller. There could be
more on-board RAM that could speed this up, but before building it,
this should be determined.
As for the cost, I guesstimated about $30 just based on the big
parts. If the uC is about $10, PCBs around $10 (see ExpressPCB or
PCB123), and possibly external SRAM for $5, I imagined the rest of the
misc parts like connectors, resistors, caps, etc. running another $5.
Of course this price drops dramatically in order quantities larger than
2 or 3. For prototyping, I'm cool with this. And once a prototype
works, if a big enough group of people were to want a cable, a bulk
order could be done to shave the cost.
Also, the beauty of these USB parts from Atmel is that they come pre-
configured with bootloaders, so there is no need for custom ones. This makes
things a lot faster to get up and running, especially for those who
want to build their own. If the parts list, schematic, and PCB layouts
were available, people could make their own and not need any special
programming hardware (like the JTAGICE mkII or AVRISP).
Spiro, I'll talk with you more about this as the project progresses.
As for Womo's mention of MNib, I'm clueless on this front. As I
understand nibblers, they require the _drive_ to have the additional
memory. I'm probably way out on this one. If the serial interface
was responsive enough, would it be possible to do a nibbler in the
cable? That is, if the cable had 8K of RAM available to hold an
entire track, would it be possible to do the nibbler application?
That is something I could certainly work into the system.
Last time I checked you could get a USB 3.5" floppy drive. Commodore was
ahead of their time with an external, serial port drive when they came out
with the 1581. ;-)
--
Best regards,
Sam Gillett
Change is inevitable,
except from vending machines!
On Jan 21, 10:43 am, Jim Brain <br...@jbrain.com> wrote:
> However, instead of re-inventing the wheel, join the OpenCBM-devel list
> (https://lists.trikaliotis.net/listinfo/opencbm-devel) and see if anyone
> has the firmware. It was an AVR-based solution, with soft USB. If you
> want to use a newer AVRUSB device, I am sure you can ditch the soft USB.
> A nice bonus is that XU1541 support is built into the OpenCBM
> routines, so no integration costs.
I'll do that. In my musings on my blog, I was considering the
implementation of the protocol between the PC and the cable, and OpenCBM
seemed a good choice. Already existing integration into a toolset
would be a huge bonus: no front-end work would be required, just the
back-end. And I could steal a lot from the XU1541.
It sounds like dropping the soft USB would be a big help in terms of
performance. It looks like the ATmega8 and AT90USB use the same
processor core, so there won't be a big jump in internal performance.
However, offloading all the USB work into hardware, the additional
RAM and flash, and the fact that the AT90USB can run at up to 16MHz all help a
lot. Also, the AT90USB supports external memory, so it is possible to
stuff in up to an additional 64K of external SRAM to help.
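Just to show how little code the external memory takes, here's a rough sketch of
how I imagine enabling the XMEM interface on an AT90USB1287-class part would
look (untested, register names from the datasheet, wait states left at defaults,
and the 0x8000 address is only my assumption for where external SRAM would land):

#include <avr/io.h>
#include <stdint.h>

/* Sketch only: map a cheap 32Kx8 SRAM into the upper data address space. */
static void xmem_init(void)
{
    XMCRA |= (1 << SRE);   /* external SRAM enable (ports A/C become the bus) */
    XMCRB = 0;             /* full external address range, no bus keeper      */
}

/* A track buffer placed at an assumed external address; internal SRAM
   occupies the low addresses, so 0x8000 is a plausible choice. */
static volatile uint8_t * const track_buf = (volatile uint8_t *)0x8000;

(On the older ATmega128 the SRE bit lives in MCUCR instead, so the exact
register names depend on the device header.)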
> It does exist, but political infighting or something made the original
> author grow uninterested in the project. At the beginning of the year,
Political in-fighting concerning the production of a cable to connect an
antique disk drive to a PC?
Now I have, literally, heard everything.
I may have taken some liberties. I wasn't following the cable closely,
but it was a serious concern for a few months early in the year, then it
died down. I assumed the cable was done, so I was surprised to see the
announcement. Later responses spoke of boards not paid for, lack of
direction, etc. It's all in the archives.
I am on other things, but VIP (Virtual IEC Peripheral) is still sitting
here, waiting for me to fix the IEC routine bugs. It's next on my list
after uIEC is finished (I'm working on that now). Though, VIP goes the other
way (64 to PC, like a 64HDD cable, but over the serial port, and soon USB,
once the other pieces are in place).
All I can say is:
Do *NOT* underestimate the effort of handling the IEC routines. The
basic stock routines are tricky enough, though you can get them down by
poring over the PRG and having a logic analyzer to check the corner
cases. But, speeders like JiffyDOS and others are a whole new ball game.
Jim
I'm glad you mentioned the archives. When I signed up for the OpenCBM
list, I signed up for opencbm-users, and I couldn't find any reference to
this. Now, of course, I've found opencbm-devel and subscribed there.
> I am on other things, but VIP (Virtual IEC Peripheral) is still sitting
> here, waiting for me to fix the IEC routine bugs). It's next on my list
> after uIEC finish (I'm working on that now). THough, VIP goes the other
> way (64 to PC, like a 64HDD cable, but over serial port (and soon USB,
> once the other pieces are in place)
The VIP sounds like something I have been interested in doing also.
And the hardware for the VIP and the cable idea I am working on could
easily do both with different firmware (or modes in the firmware).
And it could be USB instead of serial.
What is the uIEC?
> All I can say is:
>
> Do *NOT* underestimate the effort of handling the IEC routines. The
> basic stock routines are tricky enough, though you can get them down by
> poring over the PRG and having a logic analyzer to check the corner
> cases. But, speeders like JiffyDOS and others are a whole new ball game.
The implementation of the serial protocol is a concern of mine, and
implementing it on something fast and plentiful in resources would
allow for a lot of flexibility. I don't know how well understood the
protocol is--even amongst hobbyist "experts"--or how well documented
it is. I hope, though, that PC-->USB-->1541/71 would be
significantly easier than C64-->USB-->PC would be.
> The VIP sounds like something I have been interested in doing also.
> And the hardware for the VIP and the cable idea I am working on could
> easily do both with different firmware (or modes in the firmware).
> And it could be USB instead of serial.
Yes, it could. To reduce the variables, I chose serial first, knowing
that once the protocol was solid, it would be an easy thing to add USB
support.
>
> What is the uIEC?
IEC to CompactFlash and IDE.
> The implemention of the serial protocol is a concern of mine. And
> implementing it on something fast and plentiful on resources would
> allow for a lot of flexibility. I don't know how well understood the
> protocol is--even amongst hobbiest "experts"--or how well documented
The basic protocol is marginally documented, and none of the speeders
are documented officially at all.
I have a JiffyDOS implementation that works (insofar as I have
loaded and saved data with it), as does Unseen on the sd2iec project
(his is in ASM, mine is in C). Both of us used the IEC2IEEE docs to create our
implementations (and a logic analyzer).
> it is. I hope, though, that the PC-->USB-->1541/71 would be
> significantly easier than the C64-->USB-->PC would be.
I think they are about the same. I suppose the advantage of your
initial direction is the lack of need to watch for commands coming from
the 1541.
Jim
Is the protocol work on the VIP already in the XU1541 or other X1541
related programs? If not, will it be available for porting to such?
> > What is the uIEC?
>
> IEC to CompactFlah and IDE.
Another excellent candidate for an embedded system. Though the CF (I
assume you mean CF in IDE mode) interface is not something natively
supported on many "easy to use" microcontrollers. ST has a part (the
ST7265x) that has a native CF interface, though it lacks support for
external memory, and is limited to 32K of flash and 5K of RAM. And
Atmel actually has a variety of 8051-based solutions. An interesting
one is the AT80C51SND1C, which has a CF/IDE interface and a USB
controller. Again, limited in flash and RAM (64K + 4K boot and 2K,
respectively), and lots of useless stuff (an MP3 decoder and PCM audio
interface). (Reading a bit deeper in the AT80C51SND1C datasheet, it
mentions support for external memory. There is no automatic chip-
select generation, but this can be done via GPIO if the accesses
aren't totally automatic, or via some external address decode (a
CPLD?). Also the RD/WR for this memory is shared with the CF/IDE IORD/
IOWR lines. Perhaps not a problem if chip-selects are done via GPIO.)
The reason I throw all these options out is that there could be a
single board to meet a variety of needs. Basically a uC, an IEC port,
a USB port, and a CF/IDE interface. Heck, throw in an SPI interface
and you have support for SD cards as well. The trick is to find a
single uC that supports USB, CF/IDE, and SPI (and possibly MMC) with
lots of flash and RAM (or at least support for external memory). Then
just a PCB with the processor and stuffing options for all the
connectors--possibly all of them--and we can do all the work on a
single board.
Of course, one could always bit-bang the CF/IDE interface....
> > The implemention of the serial protocol is a concern of mine. And
> > implementing it on something fast and plentiful on resources would
> > allow for a lot of flexibility. I don't know how well understood the
> > protocol is--even amongst hobbiest "experts"--or how well documented
>
> The basic protocol is marginally documented, and none of the speeders
> are documentated officially at all.
>
> I have a JiffyDOS implementation that works (in so far that I have
> loaded and saved data with it), as does Unseen on the sd2iec project
> (his is in ASM, mine is C) Both of use used IEC2IEEE docs to create our
> implementations (and a logic analyzer)
The sd2iec looks interesting, as does the 1541 Ultimate. Both are
impressive projects in their own right.
I notice that often there is mention of limitations on all of these
cables because of memory and/or processing power. This is making me
question the suitability of using a simple 8-bit microcontroller for
this development. Perhaps the UC3B is the best option. With the SPI
interface, it can support SD cards. And (perhaps) the 32K of internal
RAM would be sufficient (though external memory access would be a
boon).
And looking more closely at the UC3A, it appears to be more feature-
rich. It has up to 64K of internal RAM and support for external SRAM and
SDRAM (a whopping 16MB for $2). It also has USB, an Ethernet MAC(!),
and SPI (for those SD cards). The static memory controller looks to
be configurable and could directly support the CF/IDE interface. And
it runs up to 66MHz. Certainly a more complex device, but a device
with much more flexibility.
Pete
> Another excellent candidate for an embedded system. Though the CF (I
Which describes the uIEC (www.jbrain.com/vicug/gallery/uIEC). It uses an
ATmega128 uC for all the heavy lifting, with an option for external
memory (but not required). All of the hardware is finished, and SW is
coming along nicely this week (FAT12/16/32/LFN support working for read
and write; integrating the IEC routines and adding the CMD HD-style
subdir support is progressing).
I was working on my own FAT16/32 LFN library, but I recently found FatFs,
which did all of the pieces I had yet to do (cluster chaining, etc.), so
I added LFN support to FatFs and modified it so it can use a single 512-
byte buffer for all disk accesses. It works for IDE, CF, and MMC/SD,
which are the target peripherals for uIEC.
FatFs requires 512 bytes for the FS/FP buffer and 256 bytes for the LFN name,
leaving 3.25K free on the M128. Leaving some stack room (512 bytes-1K), that
leaves room for nine 256-byte buffers, and removing one 256-byte buffer for
Channel 15 leaves 8 buffers for channels, more than the 1541 has.
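To give an idea of how little code a channel read needs once FatFs is in place,
something like the following sketch is all it takes (the function names are the
stock FatFs API; the file name and buffer size are placeholders, and the exact
f_mount signature differs between FatFs revisions):

#include "ff.h"                     /* ChaN's FatFs */

static FATFS fs;                    /* filesystem object */
static FIL   fp;                    /* file object */
static BYTE  chan_buf[256];         /* one 256-byte channel buffer */

FRESULT demo_channel_read(void)
{
    UINT got;
    FRESULT res = f_mount(&fs, "", 1);          /* mount the default volume */
    if (res != FR_OK) return res;
    res = f_open(&fp, "GAME.PRG", FA_READ);     /* placeholder file name */
    if (res != FR_OK) return res;
    res = f_read(&fp, chan_buf, sizeof chan_buf, &got);
    f_close(&fp);
    return res;
}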
> The reason I throw all these options out is that there could be a
> single board to meet a variety of needs. Basically a uC, an IEC port,
> a USB port, and a CF/IDE interface. Heck, throw in an SPI interface
> and you have support for SD cards as well. The trick is to find a
> single uC that supports USB, CF/IDE, SPI (and possibley MMC) with the
> lots of flash and ram (or at least support for external memory). Then
> just a PCB with the processor and stuffing options for all the
> connectors--possibly all of them--and we can do all the work on a
> single board.
Well, MMC/SD is easy, as SPI is most often available.
When I finish uIEC and VIP, I intend to roll a new board with the
AT90USB1287. My code will move nicely, and USB is built-in. The cost
is $1 more than the M128/M1281.
> Of course, one could always bit-bang the CF/IDE interface....
Which is what I am doing. PIO mode 1 is 3.3MB/sec, far faster than any
C64 variant I am aware of.
> The sd2iec looks interesting, as well as the 1541 Ultimate. Both of
> which are interesting projects in themselves as well.
The 1541 Ultimate is in a class by itself. But, we need to look at
pricing. sd2iec is dirt cheap (M32 and a MMC/SD port). $50-$75 is the
uIEC target. U1541 is $150 or so. With some work, I think uIEC and
sd2IEC/mmc2iec can share a codebase, as they are so much alike. That
would give the modder a tiny package, leaving cash for a single Ultimate
purchase.
> I notice that often there is mention of limitations on all of these
> cables because of memory and/or processing power. This is making me
> question the suitability of using a simple 8-bit microcontroller for
I suppose, but the M32/M128 have plenty of HP for MMC/SD/IDE/CF
purposes. USB might be a stretch, I don't know. Truly, though, if one
wants to go after more, I think you either need to go the 1541 Ultimate
approach and make a 1541-compliant system, or go with a processor fast
enough to emulate the 1541 in SW. By the time you get there, you're
probably way over the pricing sweet spot.
If you're looking to solve your own problem, a larger unit is probably
best. But, if you're looking to re-use code and possibly share a
codebase, I'd highly recommend an AVR8 derivative with USB. External
RAM is easy to add for plenty of room, SPI is there, and you can re-use
the SD2IEC or uIEC IEC routines without having to re-invent the wheel.
FatFs is already ready to go and is used in both projects, if that is of
interest.
For my part, I have been looking at the sd2iec codebase, to see where
there is overlap.
Jim
Thanks for the website link. Nice projects!
I used the ATmega2561 on a project for work (a drop-in replacement for
the ATmega128L we had in a previous project). Given that this is
doing all the IEC stuff pretty flawlessly, it sounds very doable.
> I was working on my own FAT16/32 LFN library, but I recently found FatFs
> which did all of the pieces I had yet to do (cluster chaining, etc.), so
> I added LFN support to FatFs and modified it so it can use a single 512
> byte buffer for all disk accesses. It works for IDE, CF, and MMC/SD,
> which is the target peripherals for uIEC.
We used the FlashFile code from Priio (see
https://www.priio.com/productcart/pc/viewCat_P.asp?idCategory=10) for
our last project. It doesn't support FAT32, but it was a commercial
product, and there is a fear of Microsoft patents on FAT32. It
doesn't support AVR GCC, but we used ImageCraft anyway for the
project. Heck of a lot easier than trying to write our own.
> FatFs requires 512 bytes for FS/FP buffer, 256 bytes for LFN name,
> leaving 3.25K free on M128. LEaving some stack room (512-1K), that
> leaves room for 9 256 byte buffers, and removing 1 256 buffer for
> Channel 15 leaves 8 buffers for channels, more than the 1541.
The FlashFile library we used was pretty light on the RAM end of
things, but then we configured it to only allow 2 open files at a
time. But it sounds like you have consumed about the same amount of
space.
The discussion of memory was more to address the nibbler aspect of
such a project. A parallel cable would require more RAM in order to
support reads of an entire track. Having the excess RAM also allows
for significantly more buffering, reducing the overhead of the USB
interface--though latency may still be an issue.
> > The reason I throw all these options out is that there could be a
> > single board to meet a variety of needs. Basically a uC, an IEC port,
> > a USB port, and a CF/IDE interface. Heck, throw in an SPI interface
> > and you have support for SD cards as well. The trick is to find a
> > single uC that supports USB, CF/IDE, SPI (and possibley MMC) with the
> > lots of flash and ram (or at least support for external memory). Then
> > just a PCB with the processor and stuffing options for all the
> > connectors--possibly all of them--and we can do all the work on a
> > single board.
>
> Well, MMC/SD is easy, as SPI is most often available.
>
> When i finish uIEC and VIP, I intend to roll a new board with the
> AT90USB1287. My code will move nicely, and USB is built-in. The cost
> is $1 more than the M128/M1281.
That lines up well with what I am trying to do. Perhaps we can
coordinate on the development of such a board. The AT90USBKEY is a
great springboard for this development. Since all the GPIO ports are
available as through-hole connectors, a simple breadboard development
for the IEC interface could be done until a PCB is developed.
> > The sd2iec looks interesting, as well as the 1541 Ultimate. Both of
> > which are interesting projects in themselves as well.
>
> The 1541 Ultimate is in a class by itself. But, we need to look at
> pricing. sd2iec is dirt cheap (M32 and a MMC/SD port). $50-$75 is the
> uIEC target. U1541 is $150 or so. With some work, I think uIEC and
> sd2IEC/mmc2iec can share a codebase, as they are so much alike. That
> would give the modder a tiny package, leaving cash for a single Ultimate
> purchase.
$150 for the U1541? I can't find a website for the development,
though I did find something here: http://commodore-gg.hobby.nl/innovatie_1541kaart_eng.htm
I can't figure out all the parts, but it appears to have an FPGA (and
an accompanying platform flash), an Intel flash part, and (perhaps)
some RAM/SDRAM. If I were to do this, I'd shy away from the Xilinx
parts, since they require a separate configuration PROM/flash part
(though the new BFI mode will make regular flash more usable). Actel
has flash-based FPGAs, and the Actel parts are much easier to generate
power supply rails for. The Xilinx parts all have difficult power
rail requirements, and strict power sequencing that makes design
difficult.
The expense has to come from being Xilinx based. Not just the part
cost ($38 for a Spartan3-1200!), but all the power regulation
requirements, extra platform flash, then the external flash and RAM/
SDRAM. I'm not sure why all the external devices, like the flash and
RAM/SDRAM, are required. Though I imagine they need code and data
memory, and there probably aren't enough block RAMs available with the
part they chose. Using a uC should be considerably less expensive,
and going with a faster part (say a 50+MHz 16- or 32-bit uC) could
accomplish the same. And RTL development takes a lot longer, and needs more
tools, than firmware development does.
RTL development would get you closer to cycle accurate. And it allows
implementations of actual 1541 parts in RTL. That's a plus. But the
cost sure seems prohibitive.
> > I notice that often there is mention of limitations on all of these
> > cables because of memory and/or processing power. This is making me
> > question the suitability of using a simple 8-bit microcontroller for
>
> I suppose, but the M32/M128 have plenty of HP for MMC/SD/IDE/CF
> purposes. USB might be a stretch, I don't know. Truly, though, if one
> wants to go after more, I think you either need to go the 1541 ultimate
> approach and make a 1541 compliant system, or go with a processor fast
> enough to emulate the 1541 is SW. By the time you get there, you're
> probably way over the pricing sweet spot.
The UC3A runs at 66MHz and goes for about $15. The AT90USB1287 goes
for about $16. I think the UC3A could maybe handle the 1541
Ultimate approach. And if you find a similar processor that doesn't have USB,
etc, it probably goes for less.
> If you're looking to solve your own problem, a larger unit is probably
> best. But, if you're looking to re-use code and possibly share a
> codebase, I'd highly recommend an AVR8 derivative with USB. External
> RAM is easy to add for plenty of room, SPI is there, and you can re-use
> the SD2IEC or uIEC IEC routines wihtout having to re-invent the wheel.
> FatFs is already ready to go and is used in both projects, if that is of
> interest.
If a big chunk of the code is already in C, then it should be easy to
port to any architecture. All that is needed is to write low-level
drivers. And running something like FreeRTOS allows direct access to
the hardware while running things like the USB driver in a separate
task. Much easier than trying to coordinate code blocks, especially
for something as timing-critical as the IEC protocol.
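To make that task split concrete, here's roughly how I picture it under FreeRTOS
(just a sketch; the task names, stack sizes, and priorities are assumptions and
the task bodies are placeholders):

#include "FreeRTOS.h"
#include "task.h"

/* Sketch only: the USB stack runs in its own task while the timing-critical
   IEC handling runs at a higher priority (or partly in interrupt context). */
static void usb_task(void *arg) { (void)arg; for (;;) { /* service USB endpoints */ } }
static void iec_task(void *arg) { (void)arg; for (;;) { /* handle IEC bus traffic */ } }

int main(void)
{
    xTaskCreate(usb_task, "usb", configMINIMAL_STACK_SIZE, NULL, 1, NULL);
    xTaskCreate(iec_task, "iec", configMINIMAL_STACK_SIZE, NULL, 2, NULL);
    vTaskStartScheduler();   /* does not return once the scheduler is running */
    for (;;) ;
}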
>
> $150 for the the U1541? I can't find a website for the development,
> though I did find something here: http://commodore-gg.hobby.nl/innovatie_1541kaart_eng.htm
http://groups.google.com/group/comp.sys.cbm/browse_thread/thread/e88622f5ed6af619?hl=en#
> That lines up well with what I am trying to do. Perhaps we can
> coordinate on the development of such a board. The AT90USBKEY is a
> great springboard for this development. Since all the GPIO ports are
> available as through-hole connectors, a simple breadboard development
> for the IEC interface could be done until a PCB is developed.
That's fine. Source for uIEC is available, although I've been working
hard over the past week to get it into shape to publish.
> I can't figure out all the parts, but it appears to have an FPGA (and
> an accompanying platform flash), an Intel flash part, and (perhaps)
> some RAM/SDRAM. If I were to do this, I'd shy away from the Xilinx
> parts, since they require separate configuration prom/flash part
> (though the new BFI mode will make regular flash more usable). Actel
> has flash based FPGA, and the Actel parts are much easier to generate
> power supply rails for. The Xilinx parts all have difficult power
> rail requirements, and strict power sequencing that makes design
> difficult.
I think Xilinx and Altera parts are used due to their free tools being
available. The Xilinx handles 6502/6522/drive electronics, while the
Flash and the RAM are the ROM/RAM of the 1541, I would believe. But,
FPGA is truly not my forte. I don't want to detract from the solution,
as it's truly a wonder, but it is a bit expensive.
sd2iec is probably at one end of the spectrum, while U1541 is at the
other.
> The UC3A runs at 66MHz and goes for about $15. The AT90USB1287 goes
> for about $16. I think the UC3A could maybe handle the 1541
> ultimate. And if you find a similar processor that doesn't have USB,
> etc, it probably goes for less.
I always thought a fast ARM core could emulate a 1MHz machine,
especially like a 1541/1571. Maybe that is the better mousetrap. Same
functionality as U1541, but lower price.
> If a big chunk of the code is already in C, then it should be easy to
FatFs and the DOS, yes. IEC routines, no. They are hand tuned for the
architecture.
> for something as timing critical as the IEC protocol.
Have a go at it.
Jim
Oh, I wasn't suggesting a replacement. It was meant more as
commentary on how it makes sense to integrate an existing library rather
than code from scratch. I agree that the Priio option doesn't make sense
for a hobby project.
> Once the parallel interface is in place on the 1541, Mnib only requires
> a track's worth of RAM. 8kB of RAM would be large enough for a track
> buffer. Essentially, read a track, including all sync marks and others
> items, store to host, advance to next half track or next track, repeat
> process. Storing an entire disk in uC RAM is not needed.
In that case, a parallel cable solution needs 8K + firmware overhead.
So probably 16K or so. And the AT90USB only has up to 8K of internal
RAM. So in this case, it makes sense to make use of the external
memory interface. And Jameco has 32Kx8 SRAMs for as cheap as $2, and
they are cheaper than other sizes.
> > That lines up well with what I am trying to do. Perhaps we can
> > coordinate on the development of such a board. The AT90USBKEY is a
> > great springboard for this development. Since all the GPIO ports are
> > available as through-hole connectors, a simple breadboard development
> > for the IEC interface could be done until a PCB is developed.
>
> That's fine. source for uIEC is available, although I've been working
> hard over the past week to get it into shape to publish.
I'm going to focus on getting the USB interface up and running, and
perhaps do some simple digital I/O and see how that goes. Once I think
the USB is working pretty well, I'll ping you about possible future
development.
> > The UC3A runs at 66MHz and goes for about $15. The AT90USB1287 goes
> > for about $16. I think the UC3A could maybe handle the 1541
> > ultimate. And if you find a similar processor that doesn't have USB,
> > etc, it probably goes for less.
>
> I always thought a fast ARM core could emulate a 1MHz machine,
> especially like a 1541/1571. Maybe that is the better mousetrap. Same
> functionality as U1541, but lower price.
> > If a big chunk of the code is already in C, then it should be easy to
> FatFs and the DOS, yes. IEC routines, no. They are hand tuned for the
> architecture.
By hand-tuned, do you mean they are also dependent upon the platform,
not just the architecture? Are you dependent upon processor
frequency, RAM/flash access times, etc? If only tuned to the
processor architecture, then porting to the AT90USB should be pretty
painless.
> > for something as timing critical as the IEC protocol.
>
> Have a go at it.
Baby steps. :) That is way out there. Perhaps a future project.
Pete
Yes, external RAM would be very useful. The target would be a
track of G64 data, and G64 specifies 7928 bytes per track. 32K
would be a boon, in case someone wants to MNIB 8050/8250/SFD1001
disks, though I am not sure anyone would care.
> By hand-tuned, do you mean they are also dependent upon the
> platform,
> not just the architecture? Are you dependent upon processor
> frequency, RAM/flash access times, etc? If only tuned to the
Mine are not so dependent on CLK freq, as they use timers, but I
can't say for sd2iec. As well, I factored in calling latency when
creating the timer values, so they would need to be tweaked for a
different frequency. Note that the standard IEC protocol is
frequency-invariant; it's the speeder code I am talking about.
External RAM will impose some additional delay, but I think a
tweak is all that is needed.
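For what it's worth, the timer side of this can be very small. Here's a sketch of
the kind of busy-wait I mean, assuming an ATmega128/AT90USB-class part clocked at
8 MHz so that Timer1 with a /8 prescaler ticks once per microsecond (the prescaler
and the calling-overhead compensation would change with the clock):

#include <avr/io.h>
#include <stdint.h>

/* Sketch only: busy-wait a number of microseconds using Timer1. Calling
   overhead still has to be measured and folded into the speeder timing. */
static void iec_delay_us(uint16_t us)
{
    TCCR1A = 0;                /* normal counting mode                     */
    TCCR1B = (1 << CS11);      /* clk/8 prescaler: 1 tick per us at 8 MHz  */
    TCNT1  = 0;                /* restart the counter                      */
    while (TCNT1 < us)
        ;                      /* spin until the interval has elapsed      */
}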
> Baby steps. :) That is way out there. Perhaps a future
> project.
Preaching to the choir. It unnerves me at times when folks ask why
I did the PS/2 protocol or use RS232 for something when USB is here.
Every project starts somewhere, and if you bite off too much, you
never see success. Have a go with the 90USB and keep my email
handy.
Jim
If the sd2iec can get to the MMC/SD via the standard mode 1 SPI, why
not go for USB via an SPI-USB part?
Jim Brain wrote:
> == Quote from Suudy (pet...@mudplace.org)'s article
>> In that case, a parallel cable solution needs 8K + firmware overhead.
>> So probably 16k or so. And the AT90USB only has up to 8K of internal
>> RAM. So in this case, it makes sense to make use of the external
>> memory interface. And Jameco has 32Kx8 SRAMs for as cheap as $2, and
>> they are cheaper than other sizes.
>
> Yes, external RAM would be very useful. The target would be a
> track of G64 data, and G64 specifies 7928 bytes per track.
Nope, G64 does not specify a specific track size. There
are fields reserved in the descriptor tables where someone
can specify how many bytes should be reserved to hold the actual
track data. When dumping V-Max! you need a lot more space
than only these 7928 bytes. Nevertheless, 8192 bytes should
be enough room to handle these protections reliably.
> 32K
> would be a boon, in case someone wants to MNIB 8050/8250/SFD1001
> disks, though I am not sure anyone would care.
My favorite solution would be to follow the Burstnibbler
routine and, instead of dumping to a RAM buffer, directly
transfer over a high-speed connection into the
destination container.
With one GCR byte to transfer every 25/26 microseconds,
this would result in a raw GCR data rate of about 40 KiB/s.
Am I a bit overoptimistic when I say that a full-speed
USB link should be able to handle this data rate with bulk
transfers?
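A rough sanity check of those numbers (a desktop-side sketch, not firmware; the
19-packets-per-frame figure is the theoretical full-speed bulk ceiling, real hosts
schedule less):

#include <stdio.h>

int main(void)
{
    double gcr_rate = 1.0 / 26e-6;                 /* one GCR byte per 26 us: ~38,500 B/s */
    double usb_fs_bulk_max = 19.0 * 64.0 * 1000.0; /* 19 x 64-byte packets per 1 ms frame */
    printf("GCR: %.1f KiB/s, full-speed bulk ceiling: %.0f KiB/s\n",
           gcr_rate / 1024.0, usb_fs_bulk_max / 1024.0);
    return 0;
}

So on paper the raw bandwidth is there with a wide margin.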
>> By hand-tuned, do you mean they are also dependent upon the platform,
>> not just the architecture? Are you dependent upon processor
>> frequency, RAM/flash access times, etc? If only tuned to the
>
> Mine are not so dependent on CLK freq, as they use timers, but I
> can't say for sd2iec. As well, I factored in calling latency when
> creating the timer values, so they would need to be tweaked for a
> different frequency. Note that the standard IEC protocol is
> frequency invariant, it's the speeder code I am talking about.
>
> External RAM will impose some additional delay, but I think a
> tweak is all that is needed.
>
>> Baby steps. :) That is way out there. Perhaps a future project.
>
> Preach to the choir. It unnerves me at times for folks to ask why
> I did PS/2 protocol or use RS232 for something when USB is here.
> Every project starts somewhere, and if you bite off too much, you
> never see success. Have a go with the 90USB and keep my email
> handy.
Well, same here... I got a small EZ-USB FX2 development
board over here and then the AT90USBKey development
board. But besides some blinkin'-lights experiments I did
not manage to create anything really usable :-(
Womo
The throughput is there. But I think the issue with USB is latency.
It takes a while for the transfer to start (I remember hearing
something in the ms range), but once going, it is fast. Given that
data comes off the drive at one byte every 25us, and assuming it takes 1ms for the
transfer to start, that is only 40 bytes of data to buffer. So every
1ms, 40 bytes are transferred. More about this below.
USB describes 4 types of transfers: control, interrupt, isochronous,
and bulk. Of those, interrupt transfers are maybe the best option.
Control is for control/status. Isochronous guarantees latency, but
does not guarantee delivery (packets can be dropped). And bulk
guarantees delivery but cannot guarantee latency.
But even interrupt endpoints depend upon the host polling the device
for interrupt status. The polling interval can be specified in the endpoint
descriptor, but at best it is 125us for high-speed devices, and
1ms for low/full-speed devices. Now, if the host lives up to its
commitment, we wouldn't need a lot of buffering. But this now
requires coordinating the transfers from the drive with the USB
interface. Not necessarily difficult, but a lot easier to buffer a
full track, then blast it over using a bulk transfer.
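For reference, that polling interval is just the bInterval byte of the endpoint
descriptor. A sketch of what such a descriptor would look like (field names are
from the USB 2.0 spec; the endpoint number, 64-byte packet size, and 1 ms polling
are my assumptions for a full-speed device):

#include <stdint.h>

/* Sketch of a USB 2.0 endpoint descriptor for an interrupt-IN endpoint. */
struct usb_endpoint_descriptor {
    uint8_t  bLength;          /* 7 bytes for an endpoint descriptor       */
    uint8_t  bDescriptorType;  /* 0x05 = ENDPOINT                          */
    uint8_t  bEndpointAddress; /* 0x81 = endpoint 1, IN direction          */
    uint8_t  bmAttributes;     /* 0x03 = interrupt transfer type           */
    uint16_t wMaxPacketSize;   /* 64 bytes per packet (full-speed maximum) */
    uint8_t  bInterval;        /* polling interval in ms for full speed    */
} __attribute__((packed));

static const struct usb_endpoint_descriptor int_in_ep = {
    7, 0x05, 0x81, 0x03, 64, 1
};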
A great website for this stuff is: http://www.beyondlogic.org/usbnutshell/
Pete
http://unusedino.de/ec64/technical/formats/g64.html
"At first, the defined track size value of 7928 bytes may seem to be
an arbitrary value, but it is not. It is determined by the fastest write
speed possible (speed zone 0), coupled with the average rotation speed
of the disk (300 rpm). After some math, the answer that actually
comes up is 7692 bytes. Why the discrepency between the actual size of
7692 and the defined size of 7928? Simply put, not all drives rotate at
300 rpm. Some can be faster or slower, so a upper safety margin of
+3% was built added, in case some disks rotate slower and can write
more data. After applying this safety factor, and some rounding-up,
7928 bytes per track was arrived at."
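The arithmetic works out if you take the commonly quoted fastest-zone bit cell of
3.25 us (16 MHz divided by 13, times 4 clocks per bit cell) and one 300 rpm
revolution. A quick sketch of that calculation (my reconstruction, not from the
write-up):

#include <stdio.h>

int main(void)
{
    double bit_rate = 16e6 / 13.0 / 4.0;  /* fastest zone: ~307.7 kbit/s     */
    double rev_time = 60.0 / 300.0;       /* 300 rpm -> 0.2 s per revolution */
    printf("%.0f bytes per track before the safety margin\n",
           bit_rate * rev_time / 8.0);    /* prints 7692                     */
    return 0;
}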
Your name is listed on the contrib line at the top. Is there a newer
version of the document?
Jim
http://groups.google.com/group/comp.sys.cbm/browse_thread/thread/f89c49bd15a019fe/
That's my write-up. While there may be a slightly newer version, the 7928
bytes only refers to images from the 1541 drive, as those are the _only_
G64 files in existence (that I know of).
You also didn't read the next paragraph. "Also note that this upper limit
of 7928 bytes per track really only applies to 1541 (and compatible)
disks. If this format were applied to another disk type with more sectors
per track (like the SFD1001 or the 8050), this value would be higher."
---------------------------------------------------------------------------
Peter Schepers, | Author of : 64COPY, The C64 EMU file converter
Info Systems & Technology | http://www.64copy.com
University of Waterloo, | My opinion is not likely that of the
Waterloo, Ontario, Canada | University, its employees, or anybody
(519) 888-4567 ext 36347 | on this planet. Too bad!
Peter Schepers wrote:
> In article <Nw4mj.314700$Fc.170799@attbi_s21>,
> Jim Brain <br...@jbrain.com> wrote:
>> Wolfgang Moser wrote:
>>> Nope, G64 does not specify a specific track size. There
>> Dunno, I took my information from here:
>>
>> http://unusedino.de/ec64/technical/formats/g64.html
>
> That's my write-up. While there may be a slightly newer version, the 7928
> bytes only refers to images from the 1541 drive as those are the _only_
> G64 files in existance (that I know of).
that 7928 bytes-per-track size was the result of some
theoretical calculations we made over here in c.s.c and
in private discussions, do you remember?
The 7928 value really relies on the fact that the source
disk was _recorded_ at a drive revolution speed of 300 RPM.
As soon as the speed of the drive that is used to
record a disk varies, the resulting track size varies
as well.
Since V-Max! disks were recorded on drives spinning at
around 297.5 RPM, the track size increases. Within the same
time frame (one disk revolution) a lot more bytes can be
written to disk at a given bitrate. That's why you often
see more than 8000 bytes per track when dumping such
V-Max! disks.
Otherwise the track length descriptor field within the
G64 spec/dataformat would not be needed.
> You also didn't read the next paragraph. "Also note that this upper limit
> of 7928 bytes per track really only applies to 1541 (and compatible)
> disks. If this format were applied to another disk type with more sectors
> per track (like the SFD1001 or the 8050), this value would be higher."
Yes, to split hairs a bit, one may consider a V-Max! raw
track a 1541-incompatible track format, in the sense that you
need a very custom format routine to create one.
However, it's a bit careless to expect a G64 track
to always be 7928 bytes in size.
Womo
> That's my write-up. While there may be a slightly newer version, the 7928
> bytes only refers to images from the 1541 drive as those are the _only_
> G64 files in existance (that I know of).
I was not clear enough in my original note. G64 specifies 7928 bytes
per track *for 1541 images*. Since we were talking about a 1541
project, I didn't think the additional information was relevant.
> You also didn't read the next paragraph. "Also note that this upper limit
> of 7928 bytes per track really only applies to 1541 (and compatible)
> disks. If this format were applied to another disk type with more sectors
> per track (like the SFD1001 or the 8050), this value would be higher."
I read it, but as we are talking about 1541 images, it was redundant
information.
I guess if the goal here is to ensure everyone is acutely aware that G64
format itself does not specify track size in general and that one could
use the G64 image for other drive types with other track densities, that
is fine. However, as a practical matter, G64 1541 images are the only
ones I have ever seen, and they are defined to be 7928 bytes (as per the
writeup), so the OP should plan for at least that much RAM.
Jim
Yes, but that's not what Jim was asking about. From what I gathered, he
was wondering why the value appeared to be set at 7928, like all G64
images must use this value, when that isn't the case. It also seemed as
though he didn't read the paragraph following the one he quoted which
explains more about the value.
>The 7928 value really relies on the fact that the soruce
>disk was _recorded_ at a drive revolution speed of 300RPM.
>As soon as that drive speed for the drive that is used to
>record a disk varies, the resulting track size does vary
>also.
>
>Otherwise the track length descriptor field within the
>G64 spec/dataformat would not be needed.
Of course it needs to be there; we live in a real world, not a perfect one. Not
all 1541 disks adhere to the theoretical max of 7928 and G64 doesn't limit
itself to 1541 drives. Another drive would have a different maximum track
size.
>> You also didn't read the next paragraph. "Also note that this upper limit
>> of 7928 bytes per track really only applies to 1541 (and compatible)
>> disks. If this format were applied to another disk type with more sectors
>> per track (like the SFD1001 or the 8050), this value would be higher."
>
>Yes, to splice hair a bit, one may consider a V-Max! raw
>track as 1541-incompatible track format in a way that you
>need a very custom format routine to create such one.
Maybe this needs to be mentioned in the docs. Spruce up the explanation of
the track size, and how _some_ custom speeders can influence it as well as
other drive models.
PS.
> Since V-Max! disks were recorded at drives spinning with
> around 297,5RPM, the track size increases. WIthin the same
> time frame (one disk revolution) at lot more byte can be
> written to disk at a given bitrate. That's why you often
> see more than 8000 Bytes per track, when dumping such
> V-Max! disks.
It sounds like the doc needs to be updated.
How's this sound for the paragraph you quoted:
"Second, the maximum track size is not a fixed value but depends on the
type of disk that is contained with the G64. Non-1541 images such as
SFD1001 or 8050 could make the track size larger or smaller. For 1541
images, which are about the only ones in existence, the typical track size
is 7928 bytes. This value is determined by the fastest write speed
possible (speed zone 0), coupled with the average rotation speed of the
disk (300 rpm), and assuming normal Commodore GCR data formatting. After
some math, the answer that actually comes up is 7692 bytes. Taking into
account a rotation speed safety adjustment of -3%, which would allow more
data to be written, and some rounding, 7928 bytes per track was arrived
at. Note that imaging non-standard GCR disks such as V-MAX can result in
GCR tracks over 8000 bytes, but these are rare."
Does this make things better? I've also made a few other smaller changes.
PS
Keep in mind that the current implementation of VICE only works with
7928-byte tracks. You can specify another size in the header, but it is
ignored, and VICE will crash when mounting the image. I believe CCS and HOXS
parse the header correctly, so they will use larger or smaller sized
tracks as defined.
--
-
Pete Rittwage
http://rittwage.com
C64 Preservation Project
http://c64preservation.com
OK, I've updated G64, and a few other formats, on my web site
http://ist.uwaterloo.ca/~schepers/formats.html
Anything dated past May 2007 is a new version. I've also bundled the
entire updated collection on the downloads page in formats.zip.
I find it interesting that V-MAX, and possibly other, oversize tracks
were known to some, and the G64 writeup has been around for
several years, but no one contested the description of the track size
value until now. The way the original discussions went did make it sound
like the 7928 byte count was fixed on the 1541, so the writeup was done
accordingly. At least that fallacy is corrected now.
PS.
> http://ist.uwaterloo.ca/~schepers/formats.html
Thanks for the link. I've been thinking about image formats for non-
1541 disks, and I had no idea CMD Native Partitions and such were
already supported.
It looks like one codebase could be used for the D64, D71, D80, and D82 formats, but
D81 needs a special handler and the CMD one is a superset.
If one wanted to emulate an entire CMD HD Native Partition, would D2M or
DNP be extendable to support a 16MB partition?
Jim
DNP is for the 16MB partitions. D2M is a container for DNP and emulated
partitions.
PS.
Peter Schepers wrote:
> In article <t27mj.47$yE1.3@attbi_s21>, Jim Brain <br...@jbrain.com> wrote:
>> Wolfgang Moser wrote:
>>
>>> Since V-Max! disks were recorded at drives spinning with
>>> around 297,5RPM, the track size increases. WIthin the same
>>> time frame (one disk revolution) at lot more byte can be
>>> written to disk at a given bitrate. That's why you often
>>> see more than 8000 Bytes per track, when dumping such
>>> V-Max! disks.
>> It sounds like the doc needs to be updated.
not really, since the doc mentions:
000A-000B: Maximum track size in bytes in LO/HI format
sooo, if one wants, he could make a track size of 65535
bytes within the image. If this would make sense ;-)
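In other words, a reader just pulls the value out of those two header bytes. A
sketch, assuming the header has already been read into a buffer:

#include <stdint.h>

/* Maximum track size from the G64 header, offsets 0x0A (LO) and 0x0B (HI). */
static uint16_t g64_max_track_size(const uint8_t *header)
{
    return (uint16_t)header[0x0A] | ((uint16_t)header[0x0B] << 8);
}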
> Hows this sound for the paragraph you quoted:
>
> "Second, the maximum track size is not a fixed value but depends on the
> type of disk that is contained with the G64. Non-1541 images such as
> SFD1001 or 8050 could make the track size larger or smaller. For 1541
> images, which are about the only ones in existance, the typical track size
> is 7928 bytes. This value is determined by the fastest write speed
> possible (speed zone 0), coupled with the average rotation speed of the
> disk (300 rpm), and assuming normal Commodore GCR data formatting. After
> some math, the answer that actually comes up is 7692 bytes. Taking into
> account a rotation speed safety adjustment of -3%, which would allow more
> data to be written, and some rounding, 7928 bytes per track was arrived
> at. Note that imaging non-standard GCR disks such as V-MAX can result in
> GCR tracks over 8000 bytes, but these are rare."
>
> Does this make things better? I've also made a few other smaller changes.
I think it makes things a bit better, since now every
reader clearly recognizes that the track size is of a
non-fixed nature.
Womo
This option is intriguing as well. Using something like the ATmega
with this SPI interface could provide for a cheaper solution. If you
can get away with a bare-bones ATmega part coupled with this Maxim
part, it could make for a much simpler system.
However, the package is QFP, which stinks for soldering. But so are
the AT90USB and the UC3A/UC3B. And since we have to solder a surface-
mount part anyway, going with fewer parts would be better. And since
the AT90USB runs about the same price as an ATmega (some of them), I
think this is the better option.
Just my $0.02.
Pete
When I re-read the original doc, as Jim saw it, I realized that except for
some minor mention that the size really isn't fixed, it appeared to be
fixed for 1541 at 7928.
This maximum track size value is causing me some grief as I'm trying to
understand why it is necessary. I can see it would be useful for a program
to know what the max size of _any_ track is when working in a G64. I can
see that you would need to set aside enough space in the file so that you
can safely modify a G64 track and not worry about wrapping around over
other data if the track's too small.
But could you not have simply told people to use the track size value
stored at the beginning of the track data stream?
Also, since you know that V-MAX tracks can exceed this value, should or
could we not change the 1541 default up a bit, say 8100 bytes? Surely this
would encompass all possible track sizes?
How does MNIB create its G64s? Does it use the hard-coded 7928-byte
track size as well?
>> Does this make things better? I've also made a few other smaller changes.
>
>I think it make things a bit betterr since now every
>reader does clearly recognize that the track size of
>non-fixed nature.
Well, I re-worked it many more times since I posted it, so it would be
best if you read the doc again:
http://ist.uwaterloo.ca/~schepers/formats/G64.TXT
PS.
As always, I'm really looking in terms of the User Port of a real C64,
and am still intrigued by the possibility of a User Port <-> SPI
interface. There is very cheap flash RAM that runs on SPI, SD/MMC of
course, and the MAX RS232 UART and USB parts. Permanent flash memory,
portable flash memory, fast serial, and USB ... and the cartridge port
is still clear.
> When I re-read the original doc, as Jim saw it, I realized that except for
> some minor mention that the size really isn't fixed, it appeared to be
> fixed for 1541 at 7928.
I can vouch that is what I understood. Reasons:
o It did make a minor mention about a flexible format and flexible track
lengths, but then spent a great deal of time talking about a maximum
track size, including a sub discussion on the 3% margin added.
o If the track length was truly changeable within a drive family, I
wondered why all of the participants spent so much time figuring out how
much length was needed for a 300RPM drive writing at the fastest rate on
the longest track. It seemed like, if the number could change per X64
image, the image maker would simply dump the track data to a temp data
store, keep a tally on the maximum number of bytes read per track, and
use that value in the track-length field. That would always be correct
and would not need to worry about tacking on an experimental 3% margin.
> This maximum track size value is causing me some grief as I'm trying to
> understand why it is necessary. I can see it would be useful for a program
> to know what the max size of _any_ track is when working in a G64. I can
> see that you would need to set aside enough space in the file so that you
> can safely modify a G64 track and not worry about wrapping around over
> other data if the track's too small.
>
> But could you not have simply told people to use the track size value
> stored at the beginning of the track data stream?
Or, put the track length as part of the record pointers at the beginning:
ERRLO,ERRHI,DATALO,DATAHI,LENLO,LENHI
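Purely as an illustration of that suggestion (this is not part of the current G64
specification), such a per-track record could look like:

#include <stdint.h>

/* Hypothetical extended track record: ERRLO,ERRHI,DATALO,DATAHI,LENLO,LENHI */
struct g64_track_record {
    uint16_t error_offset;  /* ERRLO/ERRHI: where the error data starts   */
    uint16_t data_offset;   /* DATALO/DATAHI: where the track data starts */
    uint16_t length;        /* LENLO/LENHI: actual length of this track   */
} __attribute__((packed));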
> Also, since you know that V-MAX tracks can exceed this value, should or
> could we not change the 1541 default up a bit, say 8100 bytes? Surely this
> would encompass all possible track sizes?
That was my question as well. If a bunch of really smart people got
their heads together to figure out 7928 (which sounds very specific,
unlike 8000 or 8192), then it must truly be an upper bound. If you
decide to bump it up, I vote for 8192, which would be easier to handle
math-wise.
Peter Schepers wrote:
> Wolfgang Moser <wn0...@d81.de.invalid> wrote:
>> not really, since the doc mentions:
>>
>> 000A-000B: Maximum track size in bytes in LO/HI format
>>
>> sooo, if one wants, he could make a track size of 65535
>> bytes within the image. If this would make sense ;-)
>
> When I re-read the original doc, as Jim saw it, I realized that except for
> some minor mention that the size really isn't fixed, it appeared to be
> fixed for 1541 at 7928.
well, for me, having been into the details from
the first discussions about a GCR format back in the
late 1990s, I really seem to have difficulties
understanding how readers interpret the G64.txt
doc.
> This maximum track size value is causing me some grief as I'm trying to
> understand why it is necessary. I can see it would be useful for a program
> to know what the max size of _any_ track is when working in a G64. I can
> see that you would need to set aside enough space in the file so that you
> can safely modify a G64 track and not worry about wrapping around over
> other data if the track's too small.
Indeed it is really useful for a certain software
implementation of a G64 image reader. If someone wants
to allocate a two-dimensional _array_ of tracks instead
of successively allocating a list of vectors of tracks,
he would need that maximum size value to correctly
declare the dimensions (number of tracks stored in the
image and maximum track size).
This saves a two-pass parse of all the descriptors of
the file to find the max value. Maybe there was
another reason once, but for me it's simply some useful
redundancy.
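A sketch of the kind of reader I mean, assuming the track count and the
maximum track size have already been pulled out of the header (the names
are only illustrative):

  #include <stdlib.h>

  /* One flat allocation sized from the header alone; track t starts at
     buffer + (size_t)t * max_track_size.  No second pass over the track
     descriptors is needed just to find the largest track. */
  unsigned char *alloc_track_array(unsigned ntracks, unsigned max_track_size)
  {
      return malloc((size_t)ntracks * max_track_size);
  }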
> But could you not have simply told people to use the track size value
> stored at the beginning of the track data stream?
>
> Also, since you know that V-MAX tracks can exceed this value, should or
> could we not change the 1541 default up a bit, say 8100 bytes? Surely this
> would encompass all possible track sizes?
You don't want to mess with the
don't-waste-my-bytes people, do you? There once were
big discussions about whether the G64 format should
include compression, single compressed tracks, smart
GCR-specific encoding/compression techniques to save
as much space as possible.
> How does MNIB create its G64s? Does it use the hard-coded 7928 byte
> track size as well?
>
>
>>> Does this make things better? I've also made a few other smaller changes.
>> I think it makes things a bit better since now every
>> reader does clearly recognize that the track size is
>> of a non-fixed nature.
>
> Well, I re-worked it many more times since I posted it, so it would be
> best if you read the doc again:
>
> http://ist.uwaterloo.ca/~schepers/formats/G64.TXT
Peter, I don't know if this section is new; you
mention some tables with track sizes and MNib track
sizes. There is one column named "Tail GAP".
What do even and odd sectors mean here?
I have never seen a disk where every odd inter-sector
GAP is 19 bytes while every even inter-sector GAP is
only 9 bytes.
In fact, every inter-sector GAP is 9 bytes, except the
one after the last sector, which I call the track tail
GAP.
Maybe you want to have a deeper look into the MNib
dump source to verify that.
Womo
Womo wrote:
>well, for me, as I was into the details from the
>first discussions about a GCR format back in the
>late 1990s, I really seem to have difficulty
>understanding the recipients' understanding of the
>G64.txt doc.
Dat's kuz jur Tschoiman, WoMo! ((-;
salaam,
dowcom
PS: I have had instances where I thought something was 'perfectly' clear,
and later realized that my knowledge of the subject had 'filled in the gaps'.
bud
To e-mail me, add the character zero to "dowcom". i.e.:
dowcom(zero)(at)webtv(dot)net.
--
http://community.webtv.net/dowcom/DOWCOMSAMSTRADGUIDE
MSWindows is television... Linux is radar.
Minor aside: Actel also makes a version of its tools available for
free download and use. (AFAIK, it comes with Modelsim, Synplify, and
supports programming up to the mid-size Flash devices.)
cf.
http://www.actel.com/techdocs/litrequest/default.aspx
http://www.actel.com/products/software/libero/licensing.aspx
It's hard to make yourself an objective observer of something you had a
personal stake in. Since I wasn't involved in G64's creation I can try to
be objective, though even for me it's not always easy.
>> This maximum track size value is causing me some grief as I'm trying to
>> understand why it is necessary. I can see it would be useful for a program
>> to know what the max size of _any_ track is when working in a G64. I can
>> see that you would need to set aside enough space in the file so that you
>> can safely modify a G64 track and not worry about wrapping around over
>> other data if the track's too small.
>
>Indeed it is really useful for a certain software
>implementation of a G64 image reader. If someone wants
>to allocate a two-dimensional _array_ of tracks instead
>of successively allocating a list of vectors of tracks,
>he would need that maximum size value to correctly
>declare the dimensions (number of tracks stored in the
>image and maximum track size).
Now who in their right mind would hold the image in memory? In reality,
all you need to do is read the real track size preceding each track data
stream, dynamically allocate that plus a small overrun buffer, and you
should be fine. Like a disk drive, you are only working on a single track
at a time. You allocate memory for only one track, only when you read it,
and only the size you need.
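Something along these lines (error handling and the size of the overrun
margin are only placeholders):

  #include <stdio.h>
  #include <stdlib.h>

  #define OVERRUN 256   /* small safety margin for rewriting in place */

  /* Seek to a track, read the 2-byte LO/HI size that precedes the data
     stream, and allocate just that much plus the margin. */
  unsigned char *read_track(FILE *f, long track_offset, size_t *len_out)
  {
      unsigned char lo, hi, *buf;
      size_t len;

      if (fseek(f, track_offset, SEEK_SET) != 0) return NULL;
      if (fread(&lo, 1, 1, f) != 1 || fread(&hi, 1, 1, f) != 1) return NULL;

      len = (size_t)lo | ((size_t)hi << 8);
      buf = malloc(len + OVERRUN);
      if (buf != NULL && fread(buf, 1, len, f) != len) {
          free(buf);
          return NULL;
      }
      if (buf != NULL) *len_out = len;
      return buf;
  }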
>This saves a two-pass parse of all the descriptors of
>the file to find the max value. Maybe there was
>another reason once, but for me it's simply some useful
>redundancy.
Why do you need to know the maximum? You should only need to know what you
are working with on that specific track. That's the point I am trying to
make. If indeed 7928 is only a "glass ceiling", then either don't use it
or bump it up a bit more to be all-inclusive.
Obviously the VICE authors thought it was a hard value, since someone said
images with values other than 7928 won't work. This means either their
understanding was flawed or the write-up I provided wasn't clear enough
and, like Jim, they thought it was indeed the "max" value.
>> But could you not have simply told people to use the track size value
>> stored at the beginning of the track data stream?
>>
>> Also, since you know that V-MAX tracks can exceed this value, should or
>> could we not change the 1541 default up a bit, say 8100 bytes? Surely this
>> would encompass all possible track sizes?
>
>You don't want to mess with the
>don't-waste-my-bytes people, do you? There once were
>big discussions about whether the G64 format should
>include compression, single compressed tracks, smart
>GCR-specific encoding/compression techniques to save
>as much space as possible.
Silly argument, especially when G64 is not prevalent enough to worry about
unused space. I couldn't care less about those who complain about wasted
space... bring 'em on! Compression would only make dealing with G64,
especially interactive read/write, very difficult.
Since you brought it up, I find the defined max value _creates_ much
wasted space, as most disks never reach that maximum, so you end up with a
lot of extra padding bytes for each track, especially on the higher tracks.
Why not deal with that loss?
>Peter, I don't know if this section is new; you
>mention some tables with track sizes and MNib track
>sizes. There is one column named "Tail GAP".
No, it's been there since the earliest writeups.
>What do even and odd sectors mean here?
>I have never seen a disk where every odd inter-sector
>GAP is 19 bytes while every even inter-sector GAP is
>only 9 bytes.
>In fact, every inter-sector GAP is 9 bytes, except the
>one after the last sector, which I call the track tail
>GAP.
>
>Maybe you want to have a deeper look into the MNib
>dump source to verify that.
Nope. This, if I remember correctly, is taken from two sources: Inside
Commodore DOS and using MNIB on my original factory disks several years
ago while decoding F64. I'm not going to set up the Commodore equipment
just to re-verify what I originally found and wrote up.
PS.
I forgot to address this. The header max track size is not really
redundant information: the header value of 7928 is fixed and is not
derived from the real tracks stored in the image. The individual track
sizes are virtually always less than the max. Hence my argument against
the very need for the header value; it seems specious at best.
PS.
That was definitely considered when I mentioned Actel. At work, we
use Xilinx based parts for a couple of reasons. First, they have the
best densities available. Actel doesn't even come close when talking
about the number of available gates. The Xilinx support, in the form
of field reps, is incredible. We have had amazing support with the
technical issues over the years. And it is hard to justify switching
to an unknown. And the worst reason--it has been what we have been
using for years.
I suspect that is part of the decision on the 1541 Ultimate. It is
what they are used to. Personally, if I were to use an FPGA in my
project, I'd go with an Actel part. The tools are free, and the parts
need only 1 or 2 power supplies (instead of the 3, sometimes 4, on other
parts: 3V3, 2V5, and 1V2 on my current work project).
> Now who in their right mind would hold the image in memory?
>
> […]
>
>>You don't want to mess with the
>>don't-waste-my-bytes people, do you? There once were
>>big discussions about whether the G64 format should
>>include compression, single compressed tracks, smart
>>GCR-specific encoding/compression techniques to save
>>as much space as possible.
>
> Silly argument, especially when G64 is not prevalent enough to worry about
> unused space. I couldn't care less about those who complain about wasted
> space... bring 'em on! Compression would only make dealing with G64,
> especially interactive read/write, very difficult.
You think in the age of 2 GiB address spaces no one in their right mind
would hold the image in memory, but it's no problem to waste space on
disk!? :-)
Ciao,
Marc 'BlackJack' Rintsch
> Personally, if I were to use an FPGA in my
> project, I'd go with an Actel part. The tools are free, and the parts
> need only 1 or 2 power supplies (instead of the 3, sometimes 4, on other
> parts: 3V3, 2V5, and 1V2 on my current work project).
cool, good to hear. :)
> That was definitely considered when I mentioned Actel. At work, we
> use Xilinx based parts for a couple of reasons. First, they have the
> best densities available. Actel doesn't even come close when talking
> about the number of available gates.
True enough, the Flash-based parts that Actel offers are largely
targeted toward small-to-medium-sized applications, whereas Xilinx and
Altera typically compete in the ultra-large FPGA segment. I think
that Actel's major strengths (relative to Xilinx or Altera) lie in
low power, low cost, and system-on-chip integration (e.g., the Igloo
and Fusion devices, coupled with the freely available ARM cores and so
forth).
At any rate, please keep us in mind in the future; our FAEs and
support teams are happy to field any questions that you may have.
regards,
Kris Vorwerk
Staff Software Engineer
Physical Design, Actel Corp.
Are there any minimal-Forth-processor-on-chip soft cores for Actel
parts?
Oh, and 65C02 machine code cores, or is that too big?
However, the ARM core is a soft-core. The advantage of the Virtex 4
with a hard PowerPC is that we don't have to waste fabric resources on
the processor. And timing (at least in our Microblaze systems) is
always a problem in the processor itself--rarely in the peripherals.
I don't know how well the ARM performs, but if the Microblaze is any
guide, that could be a problem as well.
What system-on-chip integration in relation to the Igloo and Fusion
are you referencing?
> At any rate, please keep us in mind in the future; our FAEs and
> support teams are happy to field any questions that you may have.
For a personal project, heck yeah. However, for work, we require the
densities that Xilinx provides. When Actel can provide us with DSP48-
like logic and megabits of on-chip RAM, I'd be all over it. We
especially like the flash nature of the parts, which makes soft errors
nearly moot. Low power is a benefit as well, though that is rarely a
concern in our FPGA designs.
Are the FAEs available for support on hobby projects? :)
But in reality, if the support was there for a hobby project, it could
make it much easier to convince the powers-that-be that Actel is the
way to go for our production based designs.
Pete
I haven't seen any. But there is no reason that a Forth kernel
couldn't be integrated and run on the available ARM or 8051 cores.
> Oh, and 65C02 machine code cores, or is that too big?
Too big? I don't think size is an issue. The issue is the
availability of cores. The Actel site doesn't list any 65xx cores.
It only has ARM, 8051, and the already public LEON SPARC core. And if
the parts can fit the LEON or ARM, they can certainly handle a 65C02.
OpenCores does have a core that supports the 6502, 65C02, and 65C816,
though. It claims to be cycle-accurate for the 6502, but incomplete
for 65C02. It is a micro-coded state machine, so it requires some
internal ROM for the microcode. It doesn't look overly complex, and
eliminating the 16450 that is included could shrink it even more. See
http://www.opencores.org/projects.cgi/web/t65/overview
Also, Sierra Circuit provides a 65C02 core (in addition to a bunch of
other uP cores). I don't know what they charge, but it may be trivial
for a pre-synthesized netlist, rather than a source-code license.
Check out http://www.sierracircuit.com/
Pete
(First, I want to offer a disclaimer to my previous message: my
previous comments were my own opinions; they do not represent
corporate opinions or positions. As an individual -- and also a user
of Actel parts -- I like to help others whenever possible, and my
comments here are entirely my own.)
> What system-on-chip integration in relation to the Igloo and Fusion
> are you referencing?
I was speaking about Fusion generally, in the sense that it is a mixed-
signal FPGA, integrating configurable analog, large flash memory
blocks, clock generation and management circuitry, etc.
regards,
Kris