Ran out of data on sector index 32, try reading with --begin_time 537xxx

Convergent MightyFrame

Dec 15, 2020, 5:16:45 AM
to MFM Discuss
Trying a different disk this time, after abandoning the last drive I talked about here, and doing some successful reads and emulations between then and now...

Now working with a Computer Memories, Inc Model CM5410-C Drive.
Doing an analyze looks quite successful:

root@beaglebone:~/mfm# ./mfm_read -v
Board revision C detected
Version 2.25
Drive must be between 1 and 4
root@beaglebone:~/mfm# ./mfm_read -a
Board revision C detected
Found drive at select 2
Returning to track 0
Drive RPM 3602.0
Matches count 63 for controller CONVERGENT_AWS
Header CRC: Polynomial 0x1021 length 16 initial value 0x0
Sector length 256
Data CRC: Polynomial 0x1021 length 16 initial value 0x0
Selected head 8 found 0, last good head found 3
Read errors trying to determine sector numbering, results may be in error
Number of heads 4 number of sectors 32 first sector 1
Interleave (not checked): 14 27 8 21 2 15 28 9 22 3 16 29 10 23 4 17 30 11 24 5 18 31 12 25 6 19 32 13 26 7 20 1
Drive supports buffered seeks (ST412)
Found cylinder 255 expected 256
Found cylinder 255 expected 257
Stopping end of disk search due to mismatching cylinder count
Number of cylinders 256, 8.4 MB

Command line to read disk:
--format CONVERGENT_AWS --sectors 32,1 --heads 4 --cylinders 256 --header_crc 0x0,0x1021,16,0 --data_crc  0x0,0x1021,16,0 --sector_length 256 --retries 50,4 --drive 2  --begin_time 460000
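
(For what it's worth, the reported geometry is consistent with the reported capacity:

256 cylinders x 4 heads x 32 sectors x 256 bytes/sector = 8,388,608 bytes, i.e. the 8.4 MB shown above.)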


However, when I try to do a raw transitions file read, I get this on every single cylinder and every single head, over and over again, until cylinder 41:

Retries failed cyl 41 head 2
Ran out of data on sector index 32, try reading with --begin_time 537000
Bad sectors on cylinder 41 head 2: 1
Ran out of data on sector index 32, try reading with --begin_time 537000
Ran out of data on sector index 32, try reading with --begin_time 536000
Ran out of data on sector index 32, try reading with --begin_time 536000
Ran out of data on sector index 32, try reading with --begin_time 537000
Ran out of data on sector index 32, try reading with --begin_time 536000
Ran out of data on sector index 32, try reading with --begin_time 537000
Ran out of data on sector index 32, try reading with --begin_time 537000
Ran out of data on sector index 32, try reading with --begin_time 537000
Ran out of data on sector index 32, try reading with --begin_time 536000
Ran out of data on sector index 32, try reading with --begin_time 536000
Ran out of data on sector index 32, try reading with --begin_time 537000
Ran out of data on sector index 32, try reading with --begin_time 537000
Ran out of data on sector index 32, try reading with --begin_time 537000
Ran out of data on sector index 32, try reading with --begin_time 536000
Ran out of data on sector index 32, try reading with --begin_time 537000
Ran out of data on sector index 32, try reading with --begin_time 537000
Ran out of data on sector index 32, try reading with --begin_time 537000
Ran out of data on sector index 32, try reading with --begin_time 536000
Ran out of data on sector index 32, try reading with --begin_time 537000
Ran out of data on sector index 32, try reading with --begin_time 537000
Ran out of data on sector index 32, try reading with --begin_time 537000
Failed to write word to transition file 16285

Then I guess it finally gives up on cyl 41 head 2...

So, since it is perfectly consistent on every single cylinder & head, is it possible that the drive was just formatted so that there's nothing past "sector index 32"?  And that the drive is really fine, but the reader program is expecting a more factory-standard format/use than what we have?

The drive is out of a Convergent Technologies AWS.  I've never had one of these this old before, so this is a first for me.  And I would guess it's a first for anyone, as I've never seen anyone try to restore any of the CT AWS/IWS machines, only the NGENs, which were the upgrade to the AWS.

I've uploaded the raw transitions file that was created by this process, in case analysis of it is helpful.  It can be downloaded here:


Thoughts & Ideas?

Thanks, David & Everyone!
Best
AJ

David Gesswein

Dec 15, 2020, 7:05:31 AM
to mfm-d...@googlegroups.com
Try adding --begin_time 537000 to the mfm_read command and see how it does. The reader can't
work if the index pulse occurs in the sector data. Begin_time makes a virtual index pulse at
the specified delay.
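
In other words, take the command line that the analyze printed and swap in the new begin time, so roughly something like this (the transitions file name below is only a placeholder for whatever you're actually writing to):

./mfm_read --format CONVERGENT_AWS --sectors 32,1 --heads 4 --cylinders 256 \
   --header_crc 0x0,0x1021,16,0 --data_crc 0x0,0x1021,16,0 --sector_length 256 \
   --retries 50,4 --drive 2 --begin_time 537000 \
   --transitions_file cm5410c_raw.trans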

This one means you ran out of disk space on the BeagleBone:
Failed to write word to transition file 16285
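
(A quick way to confirm that is to check free space on whatever filesystem the transition file is being written to, for example:

root@beaglebone:~/mfm# df -h .

and then delete or move older capture files, or point the output at external storage, if it's nearly full.)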



Convergent MightyFrame

Dec 18, 2020, 3:02:18 AM
to MFM Discuss
David, as always, I thank you.  And, as usual, your suggestion works!

In retrospect, I think your answer could have been obvious to me.  But it was all in how I was reading the error message text.

Ran out of data on sector index 32, try reading with --begin_time 537000 

So, I was reading this as though it WAS already trying to read with begin_time 537000, BUT instead this message was TELLING ME to try re-running the read command with the additional parameter --begin_time 537000.  Of course!  I was reading "try reading with --begin_time 537000" as a report of what WAS happening, not as a suggestion to me on what to do next.

OK, now that that's out of the way, let me say that I *think* it worked... I read the same disk with no errors.  And on to new adventures with this fantastic device, and more "Forgotten Machines" to try it out on.

I'll start a new thread with my next adventure/question.

Thanks again for making this all possible, David & All!

Best,
AJ

A M

Dec 8, 2022, 4:04:42 AM
to MFM Discuss
Glad I saw this... I was getting a similar message and didn't realize it was telling *me* to change the setting... I wish it were a little clearer in the documentation, but I did finally find it here and got a good clean backup.

Thanks!

--
Aaron

David Gesswein

Dec 8, 2022, 10:18:30 AM
to mfm-d...@googlegroups.com
I've changed the error message to:
Ran out of data on sector index 17, try adding --begin_time 243000 to mfm_read command line

Is that clearer?

Will be in next release of software.

A M

Dec 8, 2022, 11:46:12 AM
to MFM Discuss
Yeah, I think that's clearer.

Thanks!

--
Aaron