As you know, WSUS uses BITS to download its updates. Unfortunately, BITS downloads these updates in the background, using only the server's leftover bandwidth. That is not desirable in my situation, where I want to get all updates as quickly as possible. I want to force BITS to download WSUS updates in the foreground instead of the background. Is there any way to do this?
I read that the order of bit-fields within a struct is platform-specific. What if I use different compiler-specific packing options; will that guarantee the data is stored in the order it is written? For example:
On an Intel processor with the GCC compiler, the fields were laid out in memory as shown: Message.version occupied the first 3 bits of the buffer, and Message.type followed. If I find equivalent struct-packing options for the various compilers, will this be cross-platform?
No, it will not be fully portable. Packing options for structs are extensions, and are themselves not fully portable. In addition to that, C99 6.7.2.1, paragraph 10 says: "The order of allocation of bit-fields within a unit (high-order to low-order or low-order to high-order) is implementation-defined."
K&R says "Adjacent [bit-]field members of structures are packed into implementation-dependent storage units in an implementation-dependent direction. When a field following another field will not fit ... it may be split between units or the unit may be padded. An unnamed field of width 0 forces this padding..."
Bit-fields should be avoided; they aren't very portable between compilers, even for the same platform. From the C99 standard, 6.7.2.1/10, "Structure and union specifiers" (there is similar wording in the C90 standard):
An implementation may allocate any addressable storage unit large enough to hold a bit-field. If enough space remains, a bit-field that immediately follows another bit-field in a structure shall be packed into adjacent bits of the same unit. If insufficient space remains, whether a bit-field that does not fit is put into the next unit or overlaps adjacent units is implementation-defined. The order of allocation of bit-fields within a unit (high-order to low-order or low-order to high-order) is implementation-defined. The alignment of the addressable storage unit is unspecified.
You cannot guarantee whether a bit-field will 'span' an int boundary or not, and you cannot specify whether a bit-field starts at the low end or the high end of the int (this is independent of whether the processor is big-endian or little-endian).
Endianness describes byte order, not bit order. Nowadays it is almost certain that the bit order within a byte is fixed. However, when using bit-fields, endianness should be taken into account. See the example below.
If you really, really need identical binary layout, you'll need to build the fields with bit masks instead of bit-fields. For example, use an unsigned short (16 bits) for Message, and then define something like versionMask = 0xE000 to represent the three topmost bits.
There's a similar problem with alignment within structs. For instance, Sparc, PowerPC, and 680x0 CPUs are all big-endian, and the common default for Sparc and PowerPC compilers is to align struct members on 4-byte boundaries. However, one compiler I used for 680x0 only aligned on 2-byte boundaries - and there was no option to change the alignment!
This was a problem with one project I worked on, because a server process running on Sparc would query a client and find out it was big-endian, and assume it could just squirt binary structs out on the network and the client could cope. And that worked fine on PowerPC clients, and crashed big-time on 680x0 clients. I didn't write the code, and it took quite a while to find the problem. But it was easy to fix once I did.
Of course, the best answer is to use a class that reads and writes bit fields as a stream. The layout of C bit-field structures is simply not guaranteed. Not to mention that relying on it is considered unprofessional/lazy/sloppy in real-world coding.
When processing my digital inputs and outputs, I have always used the bits of an INT in parallel with the I/O address to allow for forcing and simulation. This works well for interfacing with an HMI faceplate that has 16 buttons and indicators, each driven by a different bit.
If you are using M-bit memory, you _can_ use the absolute address for the UDT tag and let the HMI multiplex the 16 bits from the MWnnn address. That gives you symbolic addressing in the PLC and HMI tag reduction.
Hi BobB, thanks for your answer. I did this before, but our maintenance team members again forgot to cancel the forced bits. I'm looking for an option to cancel them with a command. We also use an Omron NS8 HMI connected via Ethernet; maybe I can use the NS8 HMI to cancel the forced bits.
@xcanowarx, that is what I meant by this statement in my original post. The bit will remain in the state it was forced to unless the program changes it or it is an input point.
There is not really a way around this unless you know which bit is forced. If you know which bit is forced, you can use a 2301 command to both force the bit off and then cancel the force.
I was actually wondering: how long would it take to crack/brute-force a 32-bit key and a 16-bit key, respectively, on a 4 GHz and a 2 GHz PC? I know that a 32-bit integer has 4,294,967,296 combinations, while a 16-bit number has exactly 65,536, but I don't know how long it would take. Also, what processor could brute-force this in the shortest time?
A GPU will be even faster. It's recommended to brute-force the entire 2^32 space when testing numerical functions; it's fast and catches all the edge cases you might not have thought about. Brute-forcing 2^64 values takes a month or so on a fast GPU, easily doable faster on a GPU cluster.
Edit, thanks to kelalaka: a Tesla V100 can run about $2^{47}$ SHA-1 hashes per hour. So $720\,\text{hours/month} \times 2^{47}\,\text{hashes/hour} \approx 2^{56.5}\,\text{hashes/month}$, which works out to about 182 months for the full $2^{64}$ space. But each hash is substantially more work than just incrementing an integer, so a faster operation can likely be brute-forced in a month on such a GPU. Slower operations need a cluster, but such clusters can be rented from various cloud providers.
Data taken from the surface lacks the fidelity to give operators a true understanding of the downhole environment. Cerebro Force reduces the uncertainty and gives more accurate information on downhole torque, drag, and mechanical specific energy.
Cerebro Force provides operators with unmatched data-capture capability for the drilling forces experienced at the bit, delivering a clearer picture of the downhole drilling environment. Operators now have access to the most important drilling performance measurements, captured from within the drill bit, the most critical location in the bottomhole assembly.
Tech. Sgt. Christopher Saunders, 736th Aircraft Maintenance Squadron home station check dock chief, drills in a screw on a C-17 Globemaster III at Dover Air Force Base, Delaware, Sept. 29, 2023. Saunders, a former Bedrock Innovation Lab intern, created an updated drill bit case that includes extra storage compartments for bits, embedded magnets and stronger materials. These new features increased efficiency and durability and will, over time, decrease cost. (U.S. Air Force photo by Senior Airman Cydney Lee)
Tech. Sgt. Christopher Saunders, 736th Aircraft Maintenance Squadron home station check dock chief, holds a 3D-printed drill bit case at Dover Air Force Base, Delaware, Sept. 29, 2023. Saunders, a former Bedrock Innovation Lab intern, updated the drill bit case to include extra storage compartments for bits, embedded magnets and stronger materials. These new features increased efficiency and durability and will, over time, decrease cost. (U.S. Air Force photo by Senior Airman Cydney Lee)