As solution architects specialising in mobile visual micro-learning resources and digital solutions, we recognise that every business is unique, and so are the challenges you face. If you require assistance with an issue that is not listed on our site, get in touch for an obligation-free consultation about how we can help.
Connecting people, valuable resources and critical information. Combine the power of secure QR codes, digital communication and reporting to mobilise any operational process and get the right information to the right person when they need it most.
I am having an issue resizing the Squarespace embed code box once I add a code snippet. A 'Preview in Safe Mode' message and button appear after code has been added (the button doesn't function for me), and the box size can't be customised correctly, causing element overlaps and/or extra space below the box.
I contacted Squarespace support, who gave me a laundry list of troubleshooting tips regarding my browser, all of which I dutifully followed, to no avail. They told me they couldn't reproduce the error, but they were demonstrating with an empty embed code box instead of one with code in it. The source of the code doesn't seem to matter; I'm seeing the issue with both a ConvertKit form code and an AddEvent button code.
Sizing embeds is tricky, as the content doesn't load whilst editing! The best you can usually do is some trial and error: save, refresh the page, then go back into the editor and tweak the code block's size.
This is the code for the AddEvent calendar button. This was the best I could do placement-wise. The 'Preview in Safe Mode' message takes up so much extra space, and I have no control over it. Squarespace support is basically treating me like I'm an idiot and gaslighting me about the problem. I'm beginning to wonder why I encourage clients to use this platform.
Switching over to Fluid Engine has brought its share of issues; I'm sure those will be ironed out in time. I suppose I should manage my expectations. I get frustrated when Squarespace support talks around errors and instead relies on designers/developers to do their work for them, like YOU did today.
I have embedded a ConvertKit form via a code block (Fluid Engine), but there is a large area of extra space below the code block that I cannot get rid of (see attached screenshot). I have tried CSS to no avail, and I cannot drag the area/rows up any further than they currently are. Any ideas how I can remove this extra space below the embedded form?
Hello Squarespace community,
I would like to adjust the height of a code block that contains a Shopify 'Buy Button' code. The goal is to move the section underneath it up (as shown by the white arrow in the image below).
The actual button is smaller than the code block, as you can see in the image below.
How can I reduce the code block's preview size in order to gain more freedom in my design?
I've already hit the minimum code block size by dragging the edges. Is there something I can adjust in the code?
Website link: -endive-trh5.squarespace.com/
Thanks for any input you can provide,
Sergio
I'm having trouble with code and embed blocks in Fluid Engine. When I create a button with HTML, I get an 'Embedded Scripts' message, which makes the block bigger and throws off the padding/spacing with other elements.
The unsafe code is used mostly for performance reasons. The basic idea is that you go byte-by-byte over the image data and flip each byte manually (although there are more efficient and simpler ways to handle the same thing).
The underlying image is handled by GDI+, which is unmanaged code, so when you're working with the image bytes directly, you're manipulating unmanaged memory. How safe or unsafe that is turns out to be surprisingly tricky to determine: it depends a lot on how the unmanaged memory was originally allocated. Given that you're working from managed code and probably loaded the bitmap from a file or a stream, it's quite likely not really unsafe at all; there's no way for you to accidentally overwrite your managed memory, for example. The unsafe keyword isn't named for being inherently dangerous; it's named for allowing you to do very unsafe things. For example, if you had allocated the memory for the bitmap on your own managed stack, you could mess things up big time.
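To make that concrete, here's a minimal sketch of the byte-by-byte pattern being described (the class and method names, and the choice of inversion as the per-byte operation, are illustrative, not taken from the original code). LockBits hands you a pointer into the bitmap's unmanaged pixel buffer, and the unsafe block lets you flip every byte directly. It needs the /unsafe compiler switch and System.Drawing, so this is Windows/GDI+ territory:

```csharp
using System.Drawing;
using System.Drawing.Imaging;

class UnsafeInvertSketch
{
    // Invert every channel (alpha included) of a 32bpp bitmap by flipping
    // raw bytes in unmanaged memory. Note: no exception handling here; see
    // the point about UnlockBits further down.
    static unsafe void InvertInPlace(Bitmap bmp)
    {
        BitmapData data = bmp.LockBits(
            new Rectangle(0, 0, bmp.Width, bmp.Height),
            ImageLockMode.ReadWrite,
            PixelFormat.Format32bppArgb);

        for (int y = 0; y < data.Height; y++)
        {
            // Stride is the width of one scan line in bytes, padding included.
            byte* row = (byte*)data.Scan0 + y * data.Stride;
            for (int x = 0; x < data.Width * 4; x++)   // 4 bytes per pixel
                row[x] = (byte)~row[x];                // flip each byte manually
        }

        bmp.UnlockBits(data);   // skipped entirely if the loop above throws
    }
}
```

The inner loop only walks Width * 4 bytes per row, so any stride padding at the end of each scan line is left untouched.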
Overall, it's good practice to use unsafe code only when you can actually show it's worth the costs. In image processing, it's quite often a good trade-off: you're working with tons of simple data, where overheads such as bounds checking can be significant, even though the bounds are easy to verify once up front rather than in every iteration of the loop.
If you wanted to get rid of this unsafe code, one way would be to allocate your own (managed) byte[], use Marshal.Copy to copy the image data into it, do your modifications in the managed array, and then copy the results back with Marshal.Copy again. The catch is that this means allocating a byte[] as big as the original image and then copying it twice (the bounds checking is negligible in this scenario; the .NET JIT compiler optimises it away). And in the end, it's still possible to make a mistake with Marshal.Copy that gives you much the same issues you'd have with unsafe (not entirely, but that would be a much longer talk).
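Here's a sketch of that managed route, under the same illustrative setup as above: one image-sized byte[], two Marshal.Copy calls, and a plain bounds-checked loop in between.

```csharp
using System.Drawing;
using System.Drawing.Imaging;
using System.Runtime.InteropServices;

class ManagedInvertSketch
{
    // Same inversion, no unsafe keyword: copy the unmanaged buffer out,
    // modify the managed copy, copy it back. Costs one allocation the size
    // of the image plus two full copies.
    static void InvertInPlace(Bitmap bmp)
    {
        BitmapData data = bmp.LockBits(
            new Rectangle(0, 0, bmp.Width, bmp.Height),
            ImageLockMode.ReadWrite,
            PixelFormat.Format32bppArgb);
        try
        {
            // Assumes a top-down bitmap (positive stride), which is what
            // GDI+ gives you for bitmaps loaded from files or streams.
            int byteCount = data.Stride * data.Height;
            byte[] buffer = new byte[byteCount];

            Marshal.Copy(data.Scan0, buffer, 0, byteCount);   // unmanaged -> managed
            for (int i = 0; i < byteCount; i++)
                buffer[i] = (byte)~buffer[i];                 // ordinary bounds-checked code
            Marshal.Copy(buffer, 0, data.Scan0, byteCount);   // managed -> unmanaged
        }
        finally
        {
            bmp.UnlockBits(data);
        }
    }
}
```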
For me, by far the most valuable part of having unsafe as a keyword is that it lets you localise the unsafe things you're doing. While a typical unmanaged application is unsafe through and through, C# only allows you to be unsafe in specifically marked parts of the code. Those parts can still affect the rest of the code (which is one of the reasons you can only use unsafe in a FullTrust environment), but they're much easier to debug and control. It's a trade-off, as always.
However, the code is actually unsafe in a very different way: the UnlockBits call may never happen if an exception is thrown partway through. You should really use a finally clause to ensure proper cleanup of unmanaged resources.
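Applied to the unsafe sketch above, that just means moving UnlockBits into a finally block (skeleton only; the pixel loop is elided):

```csharp
BitmapData data = bmp.LockBits(
    new Rectangle(0, 0, bmp.Width, bmp.Height),
    ImageLockMode.ReadWrite,
    PixelFormat.Format32bppArgb);
try
{
    // ... pointer work over data.Scan0, as in the sketch above ...
}
finally
{
    bmp.UnlockBits(data);   // now guaranteed, even if the loop throws
}
```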
And as a final note, if you want "real" performance, safe or unsafe, you probably won't be doing image processing on the CPU anyway. Today it's often safe to assume the machine you're running on has a GPU that can do the job faster and more easily, with total isolation from the code actually running on the computer itself.
Suppose I'm reviewing code that job applicants send to prove their skills. Clearly I don't want to run any executables they send. Less obviously, I'd also rather not run the result of compiling their code (to give just one example, Java makes it possible to hide runnable code in comments).
I am pretty sure that somewhere out there, some clever people have already built such a hack for a specific language and compiler version. My favourite place to look for something like this would be the International Obfuscated C Code Contest (I don't know whether there is anything comparable for Java). In reality, though, how high do you consider the risk, given that
there are not many people in the world who actually know how to technically accomplish such a task (and googling alone won't give you a "quick ref" or tutorial on this, as you have already found out for yourself).
#2. is technically out of scope of the question, because the question was about compiling code, not running it. (OTOH, there's a deep philosophical question here: if type-checking a Haskell program can perform arbitrary Turing computation, is that compiling the program or running it?)
This leaves us with 3. Some compilers may limit the kind of access the compile time code has to the system, but for some of the use cases, having full access is unavoidable. The purpose of F#'s type providers, for example, is to "fake" synthetic types for data whose type system doesn't match F#'s, so that you can interact with, say, a web service that has a WSDL schema in a strongly-typed fashion. However, in order to do this, the type provider needs to have access to the WSDL schema resource either on the filesystem or on the web, so it needs to have filesystem and network access.
Be aware that building alone may be unsafe. For example, in C# a 'build event' lets you specify arbitrary command lines to execute before and after building, which is obviously dangerous and a lot easier to exploit than, say, a buffer overflow in the compiler code.
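For illustration, this is all it takes; a hypothetical fragment of a .csproj a candidate might send you (the command below is harmless, but it could be anything):

```xml
<!-- Hypothetical pre-build event: this command line runs on the reviewer's
     machine as soon as the project is built, before any C# is even compiled. -->
<PropertyGroup>
  <PreBuildEvent>echo any command line runs here &gt; proof.txt</PreBuildEvent>
</PropertyGroup>
```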
Instead of speculating, I actually did some research on this topic before answering, going to the most authoritative resource I could think of: CVE Details. This comprehensive list of publicly disclosed security exploits is probably the best one can do to assess the threat levels of various types of software.
I didn't take the time to read all of the available material, of course, but I selected a few "primary" compilers, IDEs, and text editors to come up with a sample threat assessment. If you're serious about running any software at all, you should at least see what threats are out there. Also note that older software is generally buggier than newer software, so running the latest version of whatever you use is ideal.
First, we can take a look at text editors. It seems the best editors are the simplest ones: vi if you're using a Linux shell, or Notepad if you're on Windows. Something with no formatting capabilities and no parsing, just straightforward viewing of the data, which stops parsing the moment a single character falls outside the current encoding scheme. Even Notepad++ has had a handful of vulnerabilities. Avoid anything complex when viewing untrusted files.
Second, we can look at IDEs. If you choose to open the file in an IDE, be aware that some IDEs have had reported bugs. Visual Studio, apparently, has had exploits delivered through its extensions mechanism, so opening a solution might be problematic. Avoiding IDEs avoids an entire class of problems between you and the untrusted code. Sticking with vi seems a lot safer.