I am using a two-function implementation to calculate Mel-frequency coefficients for a neural network. My code executes in its entirety without any issues. However, after using EnergyTrace technology to estimate the average energy needed, I observed that much of the time is spent calculating these coefficients, as they use floating-point operations. The compiler issues the ULP 5.2 advice mentioned in the title. While I found some similar questions dealing with the warning, I have not found an answer that addresses a possible solution, i.e. the method of moving operations to RAM during runtime. I cannot change or let go of floating point, as it is integral to the correctness of the calculation. How can I implement the solution for this advice/warning issued by the MSP? I tried making some changes to the linker file, namely moving my variables and pragma declarations to FRAM, and also assigning '.run' to 'FRAM2' in the relevant section of the file.
However, I am still not certain what exact changes need to be made, and where, in order to improve the power consumption while using float operations on the MSP430. Please let me know if any additional information is required. Thank you.
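For reference, on TI's MSP430 compiler a single hot function can usually be placed in RAM with a CODE_SECTION pragma, which is far more targeted than moving the whole library. A minimal sketch; the function name and body are hypothetical, and the ".TI.ramfunc" section name assumes a TI linker command file that maps that section to RAM:

```c
#include <stdint.h>

/* Sketch: place one hot floating-point routine in a RAM-resident
 * section via the TI compiler's CODE_SECTION pragma. The function
 * below is an illustrative stand-in, not the actual MFCC code. */
#pragma CODE_SECTION(compute_frame_energy, ".TI.ramfunc")
float compute_frame_energy(const float *frame, int n)
{
    float acc = 0.0f;
    for (int i = 0; i < n; i++)
        acc += frame[i] * frame[i];   /* FP multiply-accumulate */
    return acc;
}
```

With a default TI linker command file, sections named ".TI.ramfunc" are typically given load placement in FRAM and run placement in RAM, so the copy happens at startup without hand-editing the .cmd file.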
IQ math would mean converting my values into a fixed-point representation, which I want to avoid. I specifically wanted to check whether anything can be done about the warning itself: "Recommend moving to RAM during runtime".
Looking at the device data sheet, the advantage depends on the FRAM cache hit ratio. For typical hit ratios (a cache line is 4 words, so sequential code will hit about 75% of the time), executing from RAM takes about half the power. This may not be worth the effort.
The floating-point library takes up a lot of space, so not many devices would be able to hold it in RAM. In any case, telling the linker to put library code in RAM could be tricky at best, so you would want to select only those portions of code that get the most use.
Thanks for the detailed response David, this is really helpful. I wasn't sure how much of a difference moving these functions would make anyway, but I think I can now decide better what steps to take to possibly optimize the code.
You've guessed it: I'm receiving an annoying error, and I'm currently at a loss how to solve it, other than opening a support ticket or completely rebuilding the report from scratch. However, I'd prefer to just solve the issue.
What I am trying to accomplish is (re-)publishing a report that I've migrated from a MySQL datasource to a SQL Server datasource. I've done this under a new name, so as far as the service is concerned, this is a completely new report, operating via a new gateway. I write "re-publishing", though, since I've changed an existing report rather than starting completely blank.
The operation is timed out.. The exception was raised by the IDataReader interface. Please review the error message and provider documentation for further information and corrective action.
I always receive this error when refreshing in the service (manually or scheduled), while refreshing in Power BI Desktop works just fine. Also (not very surprising, given the error message), while refreshes in PBI take at most one minute, in the service it takes about 15 minutes before the error is thrown.
I had not thought about the gateway, to be honest. I'm also not sure whether it could be the culprit. The gateway definitely has more than enough resources. It's a brand new machine: 20-core CPU, 128 GB RAM, M.2 SSDs, plenty of space (terabytes), and a decent network connection. The SQL database is installed on the same machine and is not used for other purposes. Other reports refresh just fine through the same gateway, and the error is consistent in its behavior. If it were a resource issue, I'd expect more erratic behavior. On top of that, it really is a very simple report, which is why it puzzles me that specifically this report throws an error.
Specifically searching for the IDataReader part, I found complaints about SSL certificates not being signed by a trusted entity. However, how that would translate into a solution puzzles me. I find it peculiar that all the other reports refresh just fine, accessing the same resources with the same credentials.
I'm closing this case, since I cannot work out what exactly the issue is. Imho it cannot be the datasource, since that is a SQL Server database, which works just fine for all other reports. There are a few unique tables not used by other reports, but like I mentioned before, I've double-checked all data types, explicitly checked for nulls / 0 / empty string values, and trimmed and removed duplicates (not that there were any, afaik) or anything else that might conflict with the existing relations.
As I mentioned before, after re-uploading it finally refreshed just fine. Or so I thought... It turns out it was a one-time thing only. As soon as the regular refresh schedule kicked in, it started failing again. I have a ton of other work to do too, so I left it for a few days, but today I thought I'd take another crack at it, mainly because it was bugging me that I hadn't found the root cause of the error. After checking all the obvious stuff again to no avail, I decided to go through all the columns in every source query to see what's going on there. It wouldn't be the first time I run into unexpected problems due to a misaligned type.
So, I checked all numerical columns for non-numerical values (beyond the first 1000 rows, I mean), checked all dates, etc. Nothing special there. However, I did see that I have several large comment columns with a huge amount of external (also manual) input from the many different countries and languages that I work with. I also found several symbols uncommon to me, which made me wonder whether the conversion from MySQL to SQL Server (where I use the nvarchar data type for Unicode characters, which MySQL does not support in the same manner) might have something to do with it.
So, long thread (and post) short: my advice, besides all the obvious tips already out there (checking data types, relations and so on), is to also clean all your custom input data if you haven't already. Clean as in: with the Clean transform function.
My thought is that I can use the runtime of the algorithm to complete a certain task as an indirect measure of how the power consumption of that algorithm compares to another algorithm's power consumption. This would only make sense though, if the time spent in operation of the microcontroller was directly related to its power consumption. Is this assumption true?
There are many things that alter the power consumption of the device, but assume you're just computing an algorithm with all unused peripherals switched off, e.g. the ADC (pulses current each sample), GPIO (changing state consumes a small amount of current), and the watchdog circuit disabled (it runs a clock and triggers an interrupt).
Then yes: the longer something takes to compute, the more energy it has consumed. There is a trade-off between clock speed and total energy consumption (for most devices, running at the highest clock speed for the shortest time uses less energy than running at a slower clock for a longer time), but again, assuming all else stays the same, longer time = more energy.
If you want to start including other peripherals, towards the bottom of the datasheet for your device you will find chart after chart outlining the power consumption for your particular use case, and various other relations. E.g. if you leave a pin's pull-up on, how long you hold that pin low increases consumption; if you're driving something else with the signal, it will consume something in both high and low states; if you're using ADCs, the input buffers will have a non-linear current draw depending on the input voltage (usually you would disable them).
Kind of. What draws most current is the CPU clock and any active hardware peripherals such as GPIO. Hardware peripherals are a story of their own, since each one has unique power-consumption characteristics.
The efficiency of the algorithm, the "code efficiency" of the CPU, and the hardware current consumption per tick all play a part. Code efficiency in this case means how many CPU ticks it takes to execute a certain piece of higher-layer program code (C code etc.).
For example, some people argue that 8 bit MCUs should still be used because they draw less current than 32 bit ones. This tends to be true if you look at peak current consumption, but not necessarily so if you look at current consumption over time.
Take something like the C code my_uint32 = u32a + u32b;. The average 32-bit CPU will execute that line in a few assembler instructions, perhaps somewhere around 10-20 CPU ticks. An 8-bit MCU, however, will need hundreds of assembler instructions in the form of software libs to execute the same code. Maybe 500-1000 CPU ticks, very roughly counted. So the 8-bitter could need something like 100 times the execution time, and correspondingly more charge, to run the very same code. And then it is suddenly irrelevant that it draws less current per tick than the 32-bitter.
New nuclear power costs about 5 times more than onshore wind power per kWh. Nuclear takes 5 to 17 years longer between planning and operation and produces on average 23 times the emissions per unit of electricity generated. In addition, it creates risks and costs associated with weapons proliferation, meltdown, mining-related lung cancer, and waste. Clean renewables avoid all such risks.
One nuclear power plant takes on average about 14-1/2 years to build, from the planning phase all the way to operation. According to the World Health Organization, about 7.1 million people die from air pollution each year, with more than 90% of these deaths from energy-related combustion. So switching out our energy system to nuclear would result in about 93 million people dying, as we wait for all the new nuclear plants to be built in the all-nuclear scenario.