The flash programmer for your device is integrated into CCS, so the only thing you need to do is have the linker allocate your code to the flash region of the device. This is typically done with the linker command file. I would suggest taking a look at the default flash linker command file for your device (.\ccsv7\ccs_base\c2000\include\2837x_FLASH_lnk_cpu1.cmd). Note how it places most of the code (the .text section) in flash. Once you build an application that is configured to load/run from flash, just load it in CCS the same way you were loading it before. CCS figures out from the generated executable where to load/flash the various sections of code.
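To illustrate, a flash linker command file has the same general shape as the RAM one, but directs the code sections to flash. This is only a sketch: the memory names and origins below are illustrative, not the exact map from 2837x_FLASH_lnk_cpu1.cmd, so check the real file for your device.

```
MEMORY
{
PAGE 0 :   /* program memory */
   BEGIN   : origin = 0x080000, length = 0x000002   /* flash boot entry */
   FLASHA  : origin = 0x080002, length = 0x001FFE
PAGE 1 :   /* data memory */
   RAMLS0  : origin = 0x008000, length = 0x000800
}

SECTIONS
{
   codestart : > BEGIN,  PAGE = 0   /* branch to _c_int00 placed at boot entry */
   .text     : > FLASHA, PAGE = 0   /* program code lives in flash */
   .cinit    : > FLASHA, PAGE = 0   /* initialization tables also in flash */
   .ebss     : > RAMLS0, PAGE = 1   /* uninitialized data in RAM */
}
```

The key point is simply that .text (and the other load-time sections) are directed at a flash memory range instead of RAM; CCS then knows from the output file which addresses to program.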
I found a Flash program online that would be perfect for a project I'm working on. With ordinary web content you can usually just use your browser to view the source of whatever HTML, PHP, etc. you may be looking at, but when it comes to Flash we seem to be left in the dark about viewing source code.
What do you need the source code for? If it's for a particular effect (some 3D voodoo you have no clue about), ask the author how it's done instead of trying to reverse-engineer it yourself. That would be easier, anyway.
It is actually possible for Flex SWFs to be compiled with the source code bundled inside, although this is at the discretion of the author and is typically only done for tutorials or didactic examples (as it increases the SWF size slightly). Right-click on a SWF in a web page, and if the "View Source Code" option appears, you're in luck.
1) Could you check whether there is an ECC error? The GEL file disables ECC evaluation, so you may not notice the error while the debugger is connected (assuming the ECC error happened before the flash initialization routine).
C28xx_CPU1: GEL Output: Memory Map Initialization Complete
C28xx_CPU1: GEL Output: ... DCSM Initialization Start ...
C28xx_CPU1: GEL Output: ... DCSM Initialization Done ...
C28xx_CPU1: GEL Output: CM is out of reset and configured to wait boot. (If you connected previously, may have to resume CM to reach wait boot loop.)
C28xx_CPU1: If erase/program (E/P) operation is being done on one core, the other core should not execute from shared-RAM (SR) as they are used for the E/P code. User code execution from SR could commence after both flash banks are programmed.
C28xx_CPU1: Only CPU1 on-chip Flash Plugin can configure clock for CPU1, CPU2 and CM Flash operations. Plugin automatically configures PLL when CPU1 Flash operations are invoked. However, if users want to do only CPU2 or CM Flash operations without doing a prior CPU1 operation in the current session, they should click on 'Configure Clock' button in CPU1's on-chip Flash Plugin before invoking CPU2 and CM Flash operations. When this button is used, Flash Plugin will configure the clock for CPU1/CPU2 at 190MHz and CM at 95MHz using INTOSC2 as the clock source. Plugin will leave PLL config like this and user application should configure the PLL as required by application.
From what I see, the problem is not XRSn going too early or too late: the supervisors are doing their job perfectly, and because the 3V3 rail is below 3.17 V (the datasheet threshold), XRSn must be held low.
The longer reset is caused by the slower 3V3 ramp.
The question is: why does the 3V3 ramp slow down during power-up?
Regards,
Andy
Hi Andy, you are right, it is hard to tell why the 3V3 goes straight to 3.3 V when it does not boot from flash, whereas when it does boot from flash there is a slow ramp. Maybe it is because it is booting from flash and initializing all the GPIOs and so on?
As Andy was saying, the two supervisors do their job: the 3V3 slope is different and keeps XRSn low longer when it boots from flash than when it does not. I am not sure whether that is a cause or a consequence of it booting from flash.
I will double-check that; having two buck converters chained was not the best idea. My rationale was to use the first one to cover a wide Vin range (6-60 V), while the second stage generating the 3V3 and 1V2 is a copy/paste of the control card, to avoid any design mistake.
I haven't seen too many complications due to that. I would recommend you verify the loading of the various voltage regulators that you have; many have capacitive limits or requirements to achieve stable regulation (and, perhaps in your case, reliable startup/shutdown).
I recently started learning assembly and came to know about linker scripts and other low-level details of hardware programming. I am also teaching myself computer architecture and somewhere along the line I came to fear that my picture of the memory model might have been wrong all along.
According to what I understand currently, all the code and data reside in non-volatile memory just after we 'burn' the binary onto a processor; the RAM, being volatile, contains nothing upon reset. When the program begins executing, it does so from address 0x0000, which is almost always (AFAIK) the lowest address in flash. So instructions are latched onto the bus connecting the flash to the CPU core, and that is where the actual execution takes place. However, when we talk about the CPU retrieving or storing data in memory, we're usually talking about RAM. I am aware that we can read/write data in program memory as well (I've seen this done on AVRs), but is it not as common? Is it because RAM is faster than ROM that we prefer to store data there?
Does this mean that the start-up runtime code (which itself executes from flash) has to copy all the program opcodes from flash to RAM and somehow remap the flash addresses to point to RAM, so that the CPU fetches opcodes from there? Is it similar to the process by which we move the .data sections from ROM to RAM on startup?
I can imagine this to be simpler in von Neumann architectures where the program and data memories share a bus but in Harvard architectures wouldn't this mean that all the code and data have to pass through the CPU registers first?
Once your device gets faster, the situation is a little different. Mid-range ARM systems may do that as well, or they may have a mask-ROM bootloader that does something smarter: perhaps downloading code from USB or external EEPROMs into internal SRAM.
Larger, faster systems will have external DRAM and external flash. This is typical of a mobile phone architecture. At this point there is plenty of RAM available, and it's faster than the flash, so the bootloader will copy the code into RAM and execute it from there. This may involve shovelling it through the CPU registers, or it may involve a DMA transfer if a DMA unit is available.
Harvard architectures are typically small, so they don't bother with the copying phase. I've seen an ARM with a "hybrid Harvard" arrangement: a single address space containing various memories, but two different fetch units. Code and data can be fetched in parallel, as long as they are not in the same memory. So you could fetch code from flash and data from SRAM, or code from SRAM and data from DRAM, etc.
RAM is generally faster than flash, but it doesn't really matter until you're hitting clock speeds in excess of 80-100 MHz or so; as long as the flash access time is shorter than the time it takes to execute an instruction, it shouldn't matter.
The physical construction of RAM allows us to build very fast devices; much faster than flash. At this point, it makes sense to copy blocks of code into RAM before execution. This also brings additional benefits to the developer, such as being able to modify code at runtime.
Not necessarily. This is where virtual addressing comes in. Instead of program code referring to the raw hardware RAM addresses, it actually references a virtual address space. Blocks of virtual address space are mapped over to physical memory devices, which may be RAM, ROM, flash, or even device buffers.
For example, when you reference address 0x000f0004 on a micro, you might be reading address 0x0004 from the flash. The virtual address is 0x000f0004, but the physical address is just 0x0004: the entire 0x000fxxxx address range is mapped to a 64 KB physical memory device. This is just an example, of course, and the method of managing and organising virtual address space differs vastly across architectures.
As such, when you say that "the program begins executing [...] from the address 0x0000 which is almost always the lowest address in flash", you're not guaranteed to be correct. In fact, many microcontrollers start at 0x1000.
The operating system running on a general-purpose computer fetches code from the HDD and stores it in RAM for faster access. If your processor tried to fetch directly from the HDD on an ongoing basis, operations would be much slower due to the speed mismatch between the two. So RAM comes into play, holding the frequently executed pieces of your code for faster access; those pieces are in turn cached in the processor's cache memory to make access faster still.
Now, when you are working on a microcontroller, it is entirely up to you where you locate your data on the chip. If the data is constant, you might want to place it in code memory, which saves RAM, RAM being comparatively much smaller than code memory. In C, data qualified with `const` (or with a compiler-specific keyword, such as `code` on some 8051 toolchains) can be placed in code memory; otherwise it is stored in RAM. In assembly you use DB (Define Byte, on the classic 8051) to place data directly at a particular location. In some controllers, such as PIC and ARM parts, you can even write to the ROM at run time, but fetching that data takes more time.