Traceback error running rb_file.py on BBB, fresh installation


derkin...@gmail.com

unread,
Apr 26, 2019, 7:35:21 AM4/26/19
to BeagleBoard
I am currently trying to run the example rb_file.py; however, whenever I do, I get a traceback error:



[screenshot: bbb_error.JPG]



I am not quite sure how to fix this error. However, I went into the file (pruio.py) and located the lines that it stems from:

[screenshot: bbb_error2.JPG]


I'm pretty new to this library in general and as such don't know much about how to fix this kind of stuff, so any help would be greatly appreciated!


TJF

unread,
Apr 26, 2019, 12:32:29 PM4/26/19
to BeagleBoard
Oops. This is an auto-generated bug in version 0.6.4c. Thanks for reporting.

Just delete the 3 lines around PruReady. You won't need them. Then save the file.

Sorry for your trouble.


derkin...@gmail.com

unread,
Apr 27, 2019, 7:05:59 PM4/27/19
to BeagleBoard
Thanks for the reply; that solved my problem. However, I have a new one. Running rb_file.py now throws the error:

[screenshot: BBBnewError.JPG]


Any idea what could be causing this? I have not made any modifications to this file, and I have the three suggested packages (python-pruio, libpruio-lkm, libpruio-doc) installed at the latest 0.6.4c version.

TJF

unread,
Apr 27, 2019, 11:50:57 PM4/27/19
to BeagleBoard
Your system is not prepared for libpruio. Either the kernel driver uio_pruss doesn't load at all, or the driver cannot create the files /dev/uio[0-7] due to misconfiguration. For further help I'd need the output from

lsmod | grep uio
ls -l /dev/uio*
ls -l /sys/devices/platform/libpruio
sudo /opt/scripts/tools/version.sh

Regards

derkin...@gmail.com

unread,
Apr 28, 2019, 9:54:42 PM4/28/19
to BeagleBoard
Here are my outputs from each command you gave (in order):

[screenshot: BBB1.JPG]

[screenshot: bbb2.JPG]

[screenshot: bbb3.JPG]

[screenshot: bbb4.JPG]

Let me know if you need any other information from me!

Jim F

unread,
Apr 29, 2019, 12:58:37 AM4/29/19
to beagl...@googlegroups.com
Sir,

I am not contributing much to this conversation but wish to offer a suggestion. For the sake of searchability in the future, and readability now, it is usually better to copy and paste text directly into your email rather than using screenshots.

On some questions you might get more help that way. 

Best regards, 

Jim


TJF

unread,
Apr 29, 2019, 3:34:11 AM4/29/19
to BeagleBoard
@Jim F: Thanks for the helpful hint!


Comment out the RPROC line in the file /boot/uEnv.txt and enable the UIO line. It should look like

#uboot_overlay_pru=/lib/firmware/AM335X-PRU-RPROC-4-14-TI-00A0.dtbo
uboot_overlay_pru=/lib/firmware/AM335X-PRU-UIO-00A0.dtbo


After saving and rebooting, the command ls -l /dev/uio* should output

crw-rw---- 1 root users 243, 0 Apr 29 06:42 /dev/uio0
crw-rw---- 1 root users 243, 1 Apr 29 06:42 /dev/uio1
crw-rw---- 1 root users 243, 2 Apr 29 06:42 /dev/uio2
crw-rw---- 1 root users 243, 3 Apr 29 06:42 /dev/uio3
crw-rw---- 1 root users 243, 4 Apr 29 06:42 /dev/uio4
crw-rw---- 1 root users 243, 5 Apr 29 06:42 /dev/uio5
crw-rw---- 1 root users 243, 6 Apr 29 06:42 /dev/uio6
crw-rw---- 1 root users 243, 7 Apr 29 06:42 /dev/uio7


Regards

Sean Landerkin

unread,
Apr 29, 2019, 1:23:25 PM4/29/19
to BeagleBoard
Thanks for the tip on text vs. pictures; I will definitely keep that in mind before I make new posts. That fixed my issue and I am now able to run the file. Thank you so much for your help. One last question: how do I read the data written to the output.x files by rb_file.py? Since it's raw data I can't just sudo nano the files, and I am not quite sure how to convert it to something like a .txt or .csv file that I can then parse through. Any suggestions on how I would do this?

TJF

unread,
Apr 29, 2019, 3:18:27 PM4/29/19
to BeagleBoard
Hi Sean!

On Monday, April 29, 2019 at 7:23:25 PM UTC+2, Sean Landerkin wrote:
... how do I read the data that is written in the output.x file that is written by rb_file.py. Since its raw data I cant sudo nano the files and I am not quite sure how to convert it to something like a .txt or csv file that I can then parse through. Any suggestions on how I would do this?

Never use sudo during development. Instead add yourself (your user ID) to the group pruio, and work from user space.
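To follow this advice, commands like the following should work; the group name pruio comes from TJF's message, and note that a new group membership only takes effect after a fresh login:

```shell
# Add the current user to the pruio group (group name from TJF's advice).
sudo usermod -aG pruio $USER

# Membership takes effect on the next login; alternatively, start a
# shell with the new group right away:
newgrp pruio
```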

Regarding reading the data I cannot help. I just learned a bit of Python for coding the examples, but never used it again. The data gets written by libc.fwrite(), so checking the function libc.fread() may be a good start. I guess creating a size-matching uint16 array and reading the data into it will do.
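A minimal sketch of that suggestion in pure Python, using the standard array module instead of libc.fread(); the file written here is a small stand-in, since on the BBB you would read the real output file from rb_file.py (the name output.0 is an assumption based on its naming scheme):

```python
import array
import struct

# Stand-in for an rb_file.py output file: raw unsigned 16-bit samples,
# exactly as libc.fwrite() would produce them.
with open("output.0", "wb") as f:
    f.write(struct.pack("4H", 0, 1024, 2048, 4095))

# Read the whole file back as a size-matching uint16 array.
samples = array.array("H")  # "H" = unsigned 16-bit integer
with open("output.0", "rb") as f:
    samples.frombytes(f.read())

print(list(samples))  # -> [0, 1024, 2048, 4095]
```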

Regards

Charles Chao

unread,
Apr 29, 2019, 8:36:35 PM4/29/19
to BeagleBoard
Hi Jim,

"Never use sudo during development. Instead add yourself (your user ID) to the group pruio, and work from user space."

May I ask why? And how?

Charles

On Tuesday, April 30, 2019 at 3:18:27 AM UTC+8, TJF wrote:

Sean Landerkin

unread,
Apr 29, 2019, 9:20:45 PM4/29/19
to BeagleBoard
Okay, do you understand how the data is written in C, and how, in C, I could convert it to readable data? I think I understand how to carry that over from C to Python, but I don't understand the initial translation from the written form to a human-readable form.

TJF

unread,
Apr 30, 2019, 2:22:46 AM4/30/19
to BeagleBoard
Hi Charles!

On Tuesday, April 30, 2019 at 2:36:35 AM UTC+2, Charles Chao wrote:
May I ask why?

During development, bugs may lead to malfunctions. The risk of damaging the system is much smaller in user space. And when you write some data as root, you have to spend a lot of work managing the file access in order to read the data back from user space. It's safer, easier and faster to develop in user space.
 
and how?

11 hours ago, TJF wrote:
Instead add yourself (your user ID) to the group pruio, and work from user space.

Regards

TJF

unread,
Apr 30, 2019, 2:25:17 AM4/30/19
to BeagleBoard
On Tuesday, April 30, 2019 at 3:20:45 AM UTC+2, Sean Landerkin wrote:
Okay, do you understand how the data is written in C, and how in C I could convert the data to readable data? I think I understand how to transport that from C to python but I don't understand the initial translation from the written form to a human-readable form.

Dennis Lee Bieber

unread,
Apr 30, 2019, 12:13:36 PM4/30/19
to beagl...@googlegroups.com
On Mon, 29 Apr 2019 10:23:25 -0700 (PDT), Sean Landerkin
<derkin...@gmail.com> declaimed the
following:

>Thanks for the tip on text vs pictures, I will definitely take that in mind
>before I make new posts. That fixed my issue and I am now able to run the
>file, thank you so much for your help. One last question I have is how do I
>read the data that is written in the output.x file that is written by
>rb_file.py. Since its raw data I cant sudo nano the files and I am not
>quite sure how to convert it to something like a .txt or csv file that I
>can then parse through. Any suggestions on how I would do this?
>

	Have you ever shown the contents of your rb_file.py? (Or, at least,
the section that does the file output, along with a sample of what the data
looks like /in the language -- not what you'd like to see later/.)

The best place to convert the data format for later use would be in
that file (change whatever write statements you are using).

The alternative would be to use another Python script to /read/ the
file and then convert the data to something usable. After all, if you wrote
it with Python, the equivalent type of read operation should handle it.


--
Wulfraed Dennis Lee Bieber AF6VN
wlf...@ix.netcom.com

TJF

unread,
Apr 30, 2019, 1:20:51 PM4/30/19
to BeagleBoard
Hi Dennis!


On Tuesday, April 30, 2019 at 6:13:36 PM UTC+2, Dennis Lee Bieber wrote:
Have you ever shown the contents of your rb_file.py ?

 
The best place to convert the data format for later use would be in
that file (change whatever write statements you are using).

Not a good idea. The example is about sampling a large number of ADC values at high speed, more than RAM can hold. The ARM CPU is simply not fast enough to fetch, convert and write the data at the same time.
 
        The alternative would be to use another Python script to /read/ the
file and then convert the data to something usable. After all, if you wrote
it with Python, the equivalent type of read operation should handle it.

That has been said already.

Regards

Dennis Lee Bieber

unread,
Apr 30, 2019, 5:13:32 PM4/30/19
to beagl...@googlegroups.com
On Tue, 30 Apr 2019 10:20:51 -0700 (PDT), TJF
<jeli.f...@gmail.com> declaimed the
following:


>
>Not a good idea. The example is about sampling a big number of ADC values
>at high speed, more than the RAM memory can hold. The ARM CPU is simply not
>fast enough for fetching, converting and writing the data at the same time.
>
	You're already using Python -- a byte-code interpreted language. If
that's fast enough to write the data as is, it can probably handle
conversion at the same time (especially as you have a 1 millisecond
polling loop!).

However...

	If I interpret that code, the output is just a sequence of 16-bit
integers. Rudimentary post-processing (Python 3.4+ to get the iterator)
{UNTESTED}


-=-=-=-=-
import struct

CHNK_SAMPLES = 64

rdr = struct.Struct("H")  # unsigned short

fin = open("yourdata.file", "rb")

while True:
    chnk = fin.read(2 * CHNK_SAMPLES)   # read a chunk of samples
    if not chnk: break                  # exit on EOF
    for smpl in rdr.iter_unpack(chnk):  # unpack ONE sample
        print(smpl[0])                  # do something with it

fin.close()
-=-=-=-=-

Used the chunks mode to reduce actual I/O calls, otherwise one would be
doing a read operation for each sample. The smpl[0] is used as unpack()
returns a tuple of values (the format, here just "H", could specify
multiple "fields" of binary data as a "record").
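To illustrate that last point about multi-field formats: a single Struct can describe a whole "record" of several values and unpack them in one call. Seven fields are used here purely as an illustration:

```python
import struct

# A record of seven unsigned shorts; struct.calcsize("7H") is 14 bytes.
rec = struct.Struct("7H")

# Pack seven sample values into binary form, then unpack the record
# in a single call instead of one value at a time.
packed = struct.pack("7H", 0, 100, 200, 300, 400, 500, 4095)
values = rec.unpack(packed)
print(values)  # -> (0, 100, 200, 300, 400, 500, 4095)
```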

TJF

unread,
May 1, 2019, 4:50:50 AM5/1/19
to BeagleBoard


On Tuesday, April 30, 2019 at 11:13:32 PM UTC+2, Dennis Lee Bieber wrote:
        You're already using Python -- a byte-code interpreted language. If
that's fast enough to write the data as is, it can probably handle
conversion at the same time.  (especially as you have a 1millisecond
polling loop!)

You're talking about up to 200,000 samples per second. Neither FreeBASIC, nor C, nor assembler code can write human-readable numbers that fast. Yes, probably Python can.

Regards

TJF

unread,
May 1, 2019, 5:07:02 PM5/1/19
to BeagleBoard
@Sean Landerkin

I found a bug: the p1 pointer computation is wrong. Only the first and all odd chunks contain valid data. The even chunks contain garbage.

In order to get valid data you'll have to replace the line

p1 = cast(byref(p0, half), POINTER(c_ushort))

by

p1 = cast(byref(p0.contents, (half << 1)), POINTER(c_ushort))

(Computing simple pointers is pretty complicated in Python.)

On Tuesday, April 30, 2019 at 3:20:45 AM UTC+2, Sean Landerkin wrote:
Okay, do you understand how the data is written in C, and how in C I could convert the data to readable data? I think I understand how to transport that from C to python but I don't understand the initial translation from the written form to a human-readable form.

The code from Dennis works for reading the data (needs python3). In order to scale to [mV], multiply the raw data (= smpl[0]) by the factor 1800/4095, or use 1.8/4095 for [V].
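A quick worked example of that scaling, for a 12-bit ADC where the raw range 0..4095 maps onto 0..1.8 V:

```python
# Convert one raw 12-bit ADC sample to millivolts and volts,
# using the factors from TJF's message.
raw = 2048  # example raw sample, roughly mid-scale

millivolts = raw * 1800 / 4095
volts = raw * 1.8 / 4095

print(round(millivolts, 1), round(volts, 4))  # -> 900.2 0.9002
```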

Regards

Sean Landerkin

unread,
May 15, 2019, 11:17:50 AM5/15/19
to BeagleBoard
Thank you @TJF and @Dennis for your help; I figured out everything I needed to and probably wouldn't have without you! I modified the code suggested by Dennis to loop over the multiple output.x files generated, and had it write the values to a .csv so they are easily readable. Here is the code for anyone who wants to use it:


import struct
import csv
import array

# Note: this file assumes you are using 7 outputs in rb_file.py and that you have
# not changed the file naming convention. You must use the command python3 to run this file!


CHNK_SAMPLES = 64

rdr = struct.Struct("H")

write = open("data.csv", "w")  # opens a csv file called data.csv in write mode
csvwriter = csv.writer(write, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL)  # sets up the csv writer: ',' separates data entries and '"' is used for quoting
counter = 0  # counter used to iterate over the various data output files
fileName = "output.%u"  # follows the same naming format as used in rb_file.py
datarr = array.array('f', [0, 0, 0, 0, 0, 0, 0])  # an array of length 7 that will contain floats
i = 0  # index for looping over our array
while True:
    try:
        fin = open(fileName % counter, "rb")  # opens the output file corresponding to the counter value in binary read mode
        print("Now Reading: " + fileName % counter)  # updates the user on the current file being read
        while True:
            chnk = fin.read(2*CHNK_SAMPLES)  # gathers data from the output file in chunks
            if not chnk: break
            for smpl in rdr.iter_unpack(chnk):  # iterates over the values found in the chunk
                datarr[i] = smpl[0]*1.8/4095  # converts the raw data into voltage form and stores it in the ith entry
                if i == 6:  # if we have filled the array
                    csvwriter.writerow([datarr[0], datarr[1], datarr[2], datarr[3], datarr[4], datarr[5], datarr[6]])  # writes the data values to the csv
                    i = 0    # resets the index and the array values to 0
                    datarr[0] = 0
                    datarr[1] = 0
                    datarr[2] = 0
                    datarr[3] = 0
                    datarr[4] = 0
                    datarr[5] = 0
                    datarr[6] = 0
                else:
                    i = i+1
        counter = counter + 1  # raises the counter so we can move on to the next file
        fin.close()  # closes the previous file
    except IOError:  # this will trigger when you run out of data files to loop over
        print("No file called: " + fileName % counter + " found. This may be because you ran out of data.")
        write.close()  # closes the csv file
        break

Any suggestions for modifications are welcome!

Dennis Lee Bieber

unread,
May 15, 2019, 7:49:31 PM5/15/19
to beagl...@googlegroups.com
On Wed, 15 May 2019 08:17:50 -0700 (PDT), Sean Landerkin
<derkin...@gmail.com> declaimed the
following:

NOTE: if you KNOW the data will be in "records" of 7 values (and you
never have an "odd" record) you could...

>
>
>CHNK_SAMPLES = 64
>
Change that to 7 (it will mean more reads in the main loop, but...)

>rdr = struct.Struct("H")

Change to "HHHHHHH" -- will interpret all 7 values in one call to the
unpack and...


>i = 0 #sets up the index for looping over our array

Probably don't need this phase

>        while True:
>            chnk = fin.read(2*CHNK_SAMPLES)  # gathers data from the output file in chunks
>            if not chnk: break

With above changes, "chnk" should contain exactly 7 samples (14 bytes),
and
smpls = rdr.unpack(chnk)
should be a tuple or list (I haven't opened the help file to check) of all
seven.

>                datarr[i] = smpl[0]*1.8/4095  # converts the raw data into voltage form and stores it in the ith entry


datarr = [smpl * 1.8 / 4095 for smpl in smpls]

should do the conversion (you probably don't need the array initializer
either since this flat out creates a regular list of values).

> if(i == 6): #if we have filled the array
>
And since we know it is 7 values at a time, no need to test...

>csvwriter.writerow([datarr[0],datarr[1],datarr[2],datarr[3],datarr[4],datarr[5],datarr[6]])

	I think just

csvwriter.writerow(datarr)

would then suffice -- why index each element of the list, only to wrap them
back into a list?

>                    # writes the data values to the csv
>                    i = 0    # resets the index and the array values to 0
>                    datarr[0] = 0
>                    datarr[1] = 0
>                    datarr[2] = 0
>                    datarr[3] = 0
>                    datarr[4] = 0
>                    datarr[5] = 0
>                    datarr[6] = 0
>                else:
>                    i = i+1

... and not needed since the read/chunk and unpack is now working 7 values
at a time.