Hi
Today I ran into this error in the gateway log list. It seems the internal SQLite database is crashing.
The list contains 260k log entries, and it flashes between being empty and showing the error in the attached screenshot every 10 seconds.
Is there any way to limit the log entries or delete them?
I use v8.0.6.
Here is one of the log files (attached).
When I delete all of the files in the Logs folder, they are automatically recreated by the system with the same size after each restart.
Is there any way to delete all the old log files?
SQLite (most of the time) only allows one writer at a time, because it's just a DB file and has to maintain locking guarantees. Are you doing anything in scripting that might be interacting with the logging database? Are you running any other program on your computer that might be connecting to these files?
I'm not sure what 'system' means in this case; Ignition, or your actual OS? The OS definitely shouldn't be recreating these files, although I notice your screenshot is Raspbian or similar; did the log files somehow make it into a disk image or backup that's being restored? If you shut down the gateway, delete /usr/local/ignition/logs/, and then restart the gateway, you should only get the system_logs.idb and a single wrapper.log file until it rolls over due to the settings in ignition.conf. If you're getting other behavior (such as the files being recreated without the gateway running), it's something else on your system.
There's zero code in Ignition that would restore old log files. Either they're not actually getting deleted, or something else about the OS is restoring them. The logging infrastructure around the wrapper.log files isn't even part of Ignition; it's part of the Tanuki service wrapper.
Based on the log output, it looks like you're hitting log file cleanup triggered by exceeding the configured size (which defaults to 100 MB). The cleanup appears to take over 5 seconds, which is long enough to block a request from the web UI (for viewing the logs), resulting in that error. It may well clear up on its own, since the cleanup keeps pruning (500 events per cycle by default when the disk-space limit is exceeded). If it doesn't, you can look at the tuning parameters in data/logback.xml, specifically diskspaceCleanupEventCount (defaults to 500) and maxDatabaseSize (defaults to 104857600 bytes). Note that the maintenance elements in that XML file are commented out by default.
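If you want to confirm that pruning is actually making progress, you can query the SQLite file directly with the sqlite3 command-line tool. This is only a hedged sketch: it assumes the .idb uses the standard logback logging_event schema, and it's safest to run against a copy of the file (or with the gateway stopped).

    -- Row count in the log database; compare before and after a cleanup cycle.
    SELECT COUNT(*) AS entry_count FROM logging_event;

    -- Oldest and newest entries (timestmp is the standard logback column name),
    -- to confirm that old events really are being pruned.
    SELECT MIN(timestmp) AS oldest, MAX(timestmp) AS newest FROM logging_event;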
It turns out that the database had indeed grown large under those settings, and cleaning it up took time; viewing the log page at the same time is what caused this issue. Now that the log database has been cleaned, it seems to be fine for the moment.
Hi all, I use the following code in an Execute SQL Task. I set the result set to Single row, the input parameter data type is varchar(8000), and the result set is saved in a variable with data type varchar(8000). When I run the package I get this error:

[Execute SQL Task] Error: Executing the query " -- SET NOCOUNT ON added to prevent extra result ..." failed with the following error: "An error occurred while extracting the result into a variable of type (DBTYPE_STR)". Possible failure reasons: Pro...

Can anyone tell me how to fix this, please? I think the way I use parameters in my code is not correct. This is the part of the query where the parameter is used: (FROM (SELECT [object_id] = OBJECT_ID( ? , 'U')) o ). When I run the hardcoded version of the query in Management Studio, I don't get any error.
The error message you're receiving suggests that there is an issue extracting the result of your query into a variable of type DBTYPE_STR in your Execute SQL Task. The issue may be related to the way you are using parameters in your query.
One thing to check is whether the parameter you are passing in is the correct data type and size. You mentioned that your input parameter has a data type of varchar(8000), but it would be helpful to double-check that this matches the data type of the parameter in your query. Additionally, if the length of the input parameter is shorter than 8000 characters, you may want to consider using a smaller data type, such as varchar(255).
Another potential issue could be related to the way you are referencing the @SourceTable variable in your query. It looks like you are using dynamic SQL to create a table based on the table name stored in the @SourceTable variable. In your full query, you are using a parameter (?) to pass in the table name, but in your hardcoded query, you are not using a parameter. If the issue persists, you may want to try using a hardcoded table name in your full query to see if that resolves the issue.
Lastly, it's possible that there is an issue with the result set being returned by your query. You mentioned that you set the result set to a single row and saved it in a variable of type varchar(8000). If the result set contains more than one row or if the size of the result set exceeds the maximum length of the variable, this could cause an issue. You may want to check the size of the result set and consider using a larger variable if necessary.
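To make the two suggestions above concrete, here is a minimal sketch of the pattern, assuming an OLE DB connection where ? maps to the input parameter that carries the table name (names are illustrative, not taken from the original package). OBJECT_ID() returns an int, and it returns NULL when the object isn't found, so casting the single-row result to varchar is one common way to keep it compatible with a String (DBTYPE_STR) variable.

    -- Hedged T-SQL sketch: single-row result shaped to fit a varchar/String variable.
    -- The ? placeholder maps to the SSIS input parameter holding the table name.
    SET NOCOUNT ON;

    SELECT CAST(o.[object_id] AS varchar(8000)) AS object_id_text  -- int (or NULL) -> varchar
    FROM (SELECT [object_id] = OBJECT_ID(?, 'U')) AS o;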
Overall, without more information about your query and package setup, it's difficult to provide a definitive solution. I would recommend double-checking your parameter data types, trying a hardcoded table name in your full query, and verifying the size of your result set. Additionally, if you are still experiencing issues, you may want to look into logging or debugging tools to help you identify the root cause of the issue.
Looks like the name of the table is different. I'm getting the same error. When clicking on the little grid box next to the tool and information icons (under Schemas), you can see that the table is actually displayed. The code on it says: SELECT * FROM treehouse_movie_db.movies;
"I don't know if you're in the exact same situation as me, but I was working on the project right after I installed the workbench. I restarted my computer and run the statement again, I only got an error that's fixed simply double clicking the database name "treehouse_movie_db" on the sidebar. Right now I am almost at the end of the course and I haven't had any more issues :)
Got the same issue: "08:47:09 select * from movies LIMIT 0, 1000 Error: Error formatting SQL query: empty string given as argument for ! character". But having read these comments, I double-clicked the movies database and it immediately ran fine. The course script probably needs to explain selecting which database to work on, or something? Sorry, complete beginner.
I got the same error too. I used to see this in SQL Server Management Studio as well (but there was a drop-down box there, so you could easily see which database you were selecting from). You have to select the database first before selecting from it, which, as Peter says, you can do by double-clicking it on the left-hand side under SCHEMAS. Thanks for that, Peter!
I got the same error also, and found my answer here - needing to double-click the database name in order to select it prior to running the query. This is not all that dissimilar to phpMyAdmin, for those who are familiar, in that you need to select a specific database before importing, exporting, or running a query.
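For anyone who would rather handle this in the query itself: double-clicking the schema in the sidebar just sets the default database, and you can do the same thing with a USE statement, or by fully qualifying the table name as in the SELECT shown above.

    -- Equivalent to double-clicking the schema under SCHEMAS: set the default database first.
    USE treehouse_movie_db;
    SELECT * FROM movies LIMIT 0, 1000;

    -- Or skip USE and qualify the table name directly:
    SELECT * FROM treehouse_movie_db.movies;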
I'm not sure if it's still done the same way in CF2018, but I know that in CF10 and CF11, the "line number" isn't always the correct line number when a query is involved. It's usually (but not always) the last line of the query itself. So, in a nutshell, line #134 most likely isn't the issue. "Non-numeric character where a numeric character was expected" makes me think that something is in the value that shouldn't be.
I have installed CF2018 fresh on a Windows 2016 server for the test and prod environments. The same setup works without any errors in the test environment, but in the prod environment I get the errors I have mentioned. So nothing really has changed, and it's not a CF update; it's a fresh installation of CF2018.
The only difference I can see is that I had to use a JDBC driver connection string to create the datasource in the Admin portal, whereas in the test environment I could directly select the Oracle option from the drop-down list, so I did not have to use a JDBC connection string there. Do you think this could be the reason?
OK, I think I have noticed a trend in the error: in the "Date from" field, if I change the time from 00:00 to anything else, it does not throw any error. What does this mean, and how can I fix the code so that a time of 00:00 works as well?
Hi BKBK, this was originally how the issue started, but further work on this does not seem to give the time part separately. For now I am unable to understand why it is not working for a time of 00:00:00 when it works for any other time value.
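For what it's worth, "non-numeric character where a numeric character was expected" matches the wording of Oracle's ORA-01858, which usually shows up when a date arrives as a string and is implicitly converted with a format mask that doesn't match it (a value with a 00:00 time can end up formatted differently from one with a non-zero time). One hedged workaround, sketched below with illustrative table, column, and bind names rather than the actual code, is to give the conversion an explicit mask that always includes the time part, or to bind the value as a date/timestamp instead of a string.

    -- Hedged Oracle SQL sketch: explicit format mask so a value ending in 00:00
    -- converts the same way as any other time. Names are illustrative only.
    SELECT *
    FROM   some_table
    WHERE  created_date >= TO_DATE(:date_from, 'YYYY-MM-DD HH24:MI')
    AND    created_date <  TO_DATE(:date_to,   'YYYY-MM-DD HH24:MI');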
We are facing a General Database Error [9008] on different computers while searching.
We are using Laserfiche version 11, and the database is SQL Server (SQL Server 2019 CU14 Standard).
The search criterion is a field search, and the field is EmpID. The template fields are already set to indexed.
Additionally, whenever we reset the client settings, searching works for the first 5-6 queries; after that, every query starts giving Error [9008]. The detailed error is as follows:
I came across this thread after having the same issue with a client of ours. I did an in-place upgrade for them from 10.4.3 to 11, and users started experiencing the general database error. After reading this thread I was reluctant to enable the LCE, so I asked some colleagues whether they had enabled it before. Luckily, they had run into this issue, and the following solved it for them and for my client. So the credit goes to them.