A General Runtime Error Occurred. Cannot Generate Key

Taneka Tarring
Aug 5, 2024, 1:54:29 PM
to blacassweaton
What I am basically facing is a function that runs and collects some information about struct definitions, and I need to create those struct definitions and their parse methods inside it, so that they become available inside the module.

Your getindex method achieves essentially the same result, but it will be slower. And your stream method looks a lot like a constructor, so why not have an actual constructor that sets those values? Or use @kwdef to generate that constructor for you?


I am working on an I/O library (for the CERN ROOT format) where the data is defined inside the files themselves and the parser logic is based on some specific rules. I reached a point where I am able to read data with hardcoded types and parsers, and now I am trying to simplify the code and create the logic at runtime. What I was struggling with is solved by @eval: defining structs in the global scope from variables that are local to the function that defines them (and that are actually read from the file).


The problem with Base.@kwdef is that I need to create different versions of a given struct. In my original example you see that there are different versions of Foo, which are defined in the file, with different field names and types. What I desperately need is some kind of system that gives me meaningful errors when a type (version) is not present or implemented.


After that is in place, I can go ahead and provide stream!(::Foo23) etc., which create instances of Foo with different fields and types, with sometimes really complicated logic: field lengths and types depend on the values of previous fields, etc.


They are technically not the same type but represent the same thing. The actual differences are usually quite subtle: one more field is added, or the way one field is set from the I/O data differs, etc.


At runtime, for example, I encounter a data container that holds data of type Foo with version 5. Now I need to find the appropriate parser that implements how to read the fields, so I thought it was a good idea to dispatch on the value type Foo5.
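The thread is about Julia, but since it later points to uproot for the Python side, here is a rough Python sketch of the idea: a registry of parsers keyed by (type name, version) that fails with a meaningful error when a version is missing. The names (Foo, the field names, the registry itself) are illustrative, not from any real library.

```python
# Hypothetical sketch: a versioned-parser registry that fails loudly
# when a (type name, version) pair has no implementation.
class UnknownVersionError(KeyError):
    """Raised when no parser is registered for a type/version pair."""

PARSERS = {}

def parser(name, version):
    """Decorator registering a parse function for (name, version)."""
    def register(fn):
        PARSERS[(name, version)] = fn
        return fn
    return register

def parse(name, version, data):
    try:
        fn = PARSERS[(name, version)]
    except KeyError:
        raise UnknownVersionError(
            f"No parser implemented for {name} version {version}"
        ) from None
    return fn(data)

@parser("Foo", 5)
def parse_foo5(data):
    # In the real format, field lengths and types may depend on
    # previously read values; this just maps fixed positions.
    return {"fName": data[0], "fTitle": data[1]}

print(parse("Foo", 5, ["a", "b"]))  # {'fName': 'a', 'fTitle': 'b'}
```

Asking for an unregistered version, e.g. `parse("Foo", 99, [])`, raises `UnknownVersionError` with the type name and version in the message, which is the kind of meaningful error the post asks for.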


the parser logic is based on some specific rules. I reached a point where I am able to read data with hardcoded types and parsers and now I am trying to simplify the code and create the logic during runtime


I know nothing about the ROOT format, or how complex the type/parser descriptor language is, but I am wondering whether the possible combinations could be encoded in a type parameter and the code then implemented using @generated functions. Technically it would be compile time, but it would behave like runtime.


Yes, I thought about using @generated functions, but there is quite a lot going on during parsing, so I am not sure whether I would hit a wall at some point and be unable to do everything, since @generated functions support only a small subset of the language's features.


Now I am really stuck with the actual parser function definition at runtime. If you want to get a rough idea of how the dynamic parser generation is done in Python, see rootio.py in scikit-hep/uproot3 on GitHub (commit eb2ae1ffe6fb2c2ce8cb7cbdc0919d5b51c0ff0f), but I warn you.


It consists of three substructures: TObject, fName and fTitle, and TObject has two fields called fBits and fUniqueID (this information is in a similar structure, which I also read using bootstrapped types).
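As a minimal sketch of reading such a nested header from a binary stream (the field order, widths, and endianness here are my guesses for illustration, not the actual ROOT layout):

```python
import io
import struct
from dataclasses import dataclass

@dataclass
class TObject:
    fUniqueID: int
    fBits: int

def read_tobject(stream):
    # Assumed layout: two big-endian unsigned 32-bit integers.
    unique_id, bits = struct.unpack(">II", stream.read(8))
    return TObject(unique_id, bits)

buf = io.BytesIO(struct.pack(">II", 7, 1))
print(read_tobject(buf))  # TObject(fUniqueID=7, fBits=1)
```

The fName and fTitle substructures would be read from the same stream afterwards, each by its own reader, which is how the bootstrapped readers can compose.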


When you update your function, Lambda deploys the change by launching new instances of the function with the updated code or settings. Deployment errors prevent the new version from being used and can arise from issues with your deployment package, code, permissions, or tools.


When you deploy updates to your function directly with the Lambda API or with a client such as the AWS CLI, you can see errors from Lambda directly in the output. If you use services like AWS CloudFormation, AWS CodeDeploy, or AWS CodePipeline, look for the response from Lambda in the logs or event stream for that service.


The following topics provide troubleshooting advice for errors and issues that you might encounter when using the Lambda API, console, or tools. If you find an issue that is not listed here, you can use the Feedback button on this page to report it.


The Lambda runtime needs permission to read the files in your deployment package. In Linux permissions octal notation, Lambda needs 644 permissions for non-executable files (rw-r--r--) and 755 permissions (rwxr-xr-x) for directories and executable files.


On Linux and macOS, use the chmod command to change file permissions on files and directories in your deployment package. For example, to give an executable file the correct permissions, run the following command.
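For instance (the file names here are illustrative, not from the original doc), applying the permissions described above:

```shell
# Create example files standing in for package contents.
touch handler.sh config.json
# Executables and directories need 755 (rwxr-xr-x).
chmod 755 handler.sh
# Non-executable files need 644 (rw-r--r--).
chmod 644 config.json
```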


When you upload a deployment package or layer archive directly to Lambda, the size of the ZIP file is limited to 50 MB. To upload a larger file, store it in Amazon S3 and use the S3Bucket and S3Key parameters.


When you upload a file directly with the AWS CLI, AWS SDK, or otherwise, the binary ZIP file is converted to base64, which increases its size by about 30%. To allow for this, and the size of other parameters in the request, the actual request size limit that Lambda applies is larger. Due to this, the 50 MB limit is approximate.


When you upload a function's deployment package from an Amazon S3 bucket, the bucket must be in the same Region as the function. This issue can occur when you specify an Amazon S3 object in a call to UpdateFunctionCode, or use the package and deploy commands in the AWS CLI or AWS SAM CLI. Create a deployment artifact bucket for each Region where you develop applications.


The name of the handler method in your function's handler configuration doesn't match your code. Each runtime defines a naming convention for handlers, such as filename.methodname. The handler is the method in your function's code that the runtime runs when your function is invoked.
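For example, with the Python runtime the handler string `lambda_function.lambda_handler` names the `lambda_handler` function in `lambda_function.py` (the function body here is a placeholder):

```python
# lambda_function.py
# With the handler configured as "lambda_function.lambda_handler"
# (the filename.methodname convention), Lambda invokes this function.
def lambda_handler(event, context):
    # Placeholder logic; a real handler would process `event`.
    return {"statusCode": 200, "body": "ok"}
```

If the configured handler string and the actual file or function name disagree, the runtime cannot find the method and invocation fails.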


When you configure a Lambda function with a layer, Lambda merges the layer with your function code. If this process fails to complete, Lambda returns this error. If you encounter this error, take the following steps:


Error: InvalidParameterValueException: Lambda was unable to configure your environment variables because the environment variables you have provided exceeded the 4KB limit. String measured: {"A1":"uSFeY5cyPiPn7AtnX5BsM...


The maximum size of the variables object that is stored in the function's configuration must not exceed 4096 bytes. This includes key names, values, quotes, commas, and brackets. The total size of the HTTP request body is also limited.


In this example, the object is 39 characters and takes up 39 bytes when it's stored (without white space) as the string "BUCKET":"DOC-EXAMPLE-BUCKET","KEY":"file.txt". Standard ASCII characters in environment variable values use one byte each. Extended ASCII and Unicode characters can use between 2 bytes and 4 bytes per character.
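A rough way to estimate the stored size of a variables object before hitting the 4 KB limit (the exact accounting is Lambda's own; this simply measures the UTF-8 bytes of the serialized object, including keys, values, quotes, commas, and brackets, as described above):

```python
import json

env = {"BUCKET": "DOC-EXAMPLE-BUCKET", "KEY": "file.txt"}
# Serialize without whitespace, as the doc describes the stored form.
serialized = json.dumps(env, separators=(",", ":"))
size = len(serialized.encode("utf-8"))
print(serialized)
print(size)  # bytes used, including the surrounding braces
```

Keeping this number under 4096 for the whole variables object avoids the InvalidParameterValueException shown earlier.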


Error: InvalidParameterValueException: Lambda was unable to configure your environment variables because the environment variables you have provided contains reserved keys that are currently not supported for modification.


Lambda reserves some environment variable keys for internal use. For example, AWS_REGION is used by the runtime to determine the current Region and cannot be overridden. Other variables, like PATH, are used by the runtime but can be extended in your function configuration. For a full list, see Defined runtime environment variables.


These errors occur when you exceed the concurrency or memory quotas for your account. New AWS accounts have reduced concurrency and memory quotas. To resolve errors related to concurrency, you can request a quota increase. You cannot request memory quota increases.


Concurrency: You might get an error if you try to create a function using reserved or provisioned concurrency, or if your per-function concurrency request (PutFunctionConcurrency) exceeds your account's concurrency quota.


I am new to nsys, so I just started with the classic profile to test my two-GPU distributed training model. I wrote a simple Python file with part of my calculation, and the report is successfully created, but when I move to the general model to train, there is an import problem.


Hi hwilper, I am using the newest version, 2024.1. I have actually used both the CLI and the GUI, and both of them report the same error. The command line runs a training script: nsys profile --stats=true ./Train_Ranks_GPU_01.sh

Actually, I think I found a trick to get rid of this error: I manually stopped my training model, and the report was then generated successfully. I have no idea why this works, but I hope it helps someone in the future.


To run QdstrmImporter on the target system, copy the Linux Host-x86_64 directory to the target Linux system or install Nsight Systems for Linux host directly on the target. The Windows or macOS host QdstrmImporter will not work on a Linux Target. See options below.


Hello, what is the output of nvidia-smi command on the target system? We had a recent bug in CUPTI which caused this kind of out-of-order error. It is fixed in 2024.2 version of nsys. Could you try the 2024.2 version of nsys, please?


The built-in exception classes can be subclassed to define new exceptions; programmers are encouraged to derive new exceptions from the Exception class or one of its subclasses, and not from BaseException. More information on defining exceptions is available in the Python Tutorial under User-defined Exceptions.


The expression following from must be an exception or None. It will be set as __cause__ on the raised exception. Setting __cause__ also implicitly sets the __suppress_context__ attribute to True, so that using raise new_exc from None effectively replaces the old exception with the new one for display purposes (e.g. converting KeyError to AttributeError), while leaving the old exception available in __context__ for introspection when debugging.


The default traceback display code shows these chained exceptions in addition to the traceback for the exception itself. An explicitly chained exception in __cause__ is always shown when present. An implicitly chained exception in __context__ is shown only if __cause__ is None and __suppress_context__ is false.
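These rules can be exercised with a short sketch: `raise ... from None` suppresses the implicit context for display, while the old exception stays reachable via `__context__` (the `lookup` helper is illustrative):

```python
def lookup(mapping, name):
    try:
        return mapping[name]
    except KeyError:
        # Convert KeyError to AttributeError for the caller.
        # `from None` sets __cause__ to None and
        # __suppress_context__ to True, so only the
        # AttributeError is shown in the default traceback.
        raise AttributeError(name) from None

try:
    lookup({"host": "localhost"}, "port")
except AttributeError as exc:
    print(exc.__suppress_context__)        # True
    print(type(exc.__context__).__name__)  # KeyError
    print(exc.__cause__)                   # None
```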
