Logging Application Events


Jeremy Bante

Nov 25, 2013, 11:55:40 PM
to fmsta...@googlegroups.com
Dan Smith recently started a proposal page for a convention on how a FileMaker application event log should work. I think the idea could stand to be fleshed out a little more in this forum. So what logs have folks built for themselves or seen out in the wild? What worked well, and what didn't?

One fruitful place to start might be to survey the functionality of more solidly standardized logging practices in other technologies, such as the syslog standard that Dan linked to. There's a lot of logging experience out there for us to benefit from. Further, if whatever we come up with is at least in some way interoperable with, if not simply a FileMaker implementation of, some other logging standard we like, our FileMaker logging starts life with some of the credibility of that existing standard.

Daniel Smith

Nov 26, 2013, 1:31:19 AM
to filemakerstandards.org
Thanks for starting this thread, Jeremy.

As far as I know, most established logging systems use a text file where each line represents a log entry and each piece of data in a log entry is delimited in some way (spaces, tabs, commas, etc.). This is fundamentally different from what we are dealing with in FileMaker, especially if it's going to accommodate miscellaneous name/value pairs, such as those created by my Error* and LogData custom functions. Does anybody know of another logging system that uses a key/value pair storage method?




Jeremy Bante

Nov 26, 2013, 11:24:25 AM
to fmsta...@googlegroups.com

We're not dealing with anything in FileMaker yet. Why not start with a model inspired by text file-based log systems? Their longevity says good things about them. At the very least a text file log format represents a likely target export format so that a log generated by a FileMaker application might be analyzed by the same tools other folks seem to be so fond of. That doesn't mean our storage mechanism within FileMaker has to exactly match one possible export format.

And who said anything about a dictionary storage method? I'm sure it could be fine, but I'm presuming this conversation is starting from scratch. I'm interested to see the argument for it re-articulated.

Daniel Smith

Nov 26, 2013, 2:22:11 PM
to filemakerstandards.org
We're not dealing with anything in FileMaker yet.
...

And who said anything about a dictionary storage method? I'm sure it could be fine, but I'm presuming this conversation is starting from scratch. I'm interested to see the argument for it re-articulated.

Unless someone speaks up soon over on the error source management page, I disagree. We've already created our error handling system to generate a miscellaneous set of name/value pairs. An error is a common item that will be logged, so our logging system should be able to accommodate that source. All I'm trying to get at here is that we may be starting from scratch with logging, but we have to at least consider the standard we just created for error source management.

The way I've defined the "Logger" and "Log Writer" relationship in the sample file allows for flexibility in the output format. So, in the sample file, I created a "Log Writer: FM" module to log the name/value pair data to a FileMaker table, but it would be simple enough to create a "Log Writer: syslog" or "Log Writer: log4j" to transform that data into another format.

So, I suggest we assume the source data for a log entry is received in name/value pair format; at least until an alternative is suggested.

Also, to clarify what I am referring to by "miscellaneous set of name/value pairs":
Each set of name/value pairs is likely to contain some standard values (like logLevel or currentHostTimestamp); it may also contain values that occur often but not always (errorCode, errorDescription); and it may even contain custom values that only occur once (like theValueOfSomeLocalVariable at the time the log data was generated).
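For example, the data for one entry might be assembled from pairs something like this (a sketch using a # name/value function that returns Let notation; the names and values here are only examples):

# ( "logLevel" ; 3 ) &
# ( "currentHostTimestamp" ; Get ( CurrentHostTimestamp ) ) &
# ( "errorCode" ; 401 ) &
# ( "theValueOfSomeLocalVariable" ; $someLocalVariable )

The result is a single block of text that a logging script can accept as one parameter, no matter which pairs happen to be present.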



Jeremy Bante

Dec 3, 2013, 12:26:03 AM
On Tuesday, November 26, 2013 2:22:11 PM UTC-5, dansmith65 wrote:
So, I suggest we assume the source data for a log entry is received in name/value pair format; at least until an alternative is suggested.

The format that the source data for a log entry is received in is not necessarily the format that the log entry data are recorded or exported in. The most universal feature of the different logging standards, formats, and tools I've looked at so far is a block of text for whatever message the application chooses to record — exactly the kind of place we could hypothetically dump a string of Let notation into. Let notation is quickly becoming the most popular dictionary format for passing data around within FileMaker applications, but there are external forces worth considering. We have the influence to modify the error data packet/struct/thing functions to suit whatever other standards, formats, and tools we decide are important. We are also able to transform the error data packet/struct/thing from Let notation to whatever format we choose, which could just as reasonably happen before logging an event as after. We do not have the same influence over the existing standards, formats, and tools. So any requirements we choose to impose on ourselves to support interacting with logging systems that other folks have already devised are the more rigid constraints to consider, and I believe the more fruitful place to start the conversation.

Matt Petrowsky

Dec 3, 2013, 1:29:29 AM
to fmsta...@googlegroups.com
I don't know whether you guys are interested or have access to the magazine site, but I can give you access if you want it. The video I shot about error handling uses a logging format where I simply use the name of the field itself to extract the value from a local variable with the exact same name as the field. This allows me to simply duplicate a field and rename it as needed. Very low friction.


Let me know if you want access to preview it. Otherwise, the code used within the auto-enter calculation of a log field is the following.

// Pull this field's value from the local variable with the same name,
// e.g. a field named Log::scriptName evaluates $scriptName
Evaluate ( "$" & GetValue ( Substitute ( GetFieldName ( Self ) ; "::" ; ¶ ) ; 2 ) )

With everything in Let format and ready to be #Assign(ed), I just pass it along within scripts and log it straight into fields.
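With that in place, the logging script itself stays small. Roughly this shape (a sketch; the layout and variable names are placeholders):

Set Variable [ $null ; Value: #Assign ( Get ( ScriptParameter ) ) ]   // re-creates $logLevel, $scriptName, etc. as local variables
Go to Layout [ "Log" ]
New Record/Request   // each log field's auto-enter pulls the matching $variable, as above
Commit Records/Requests [ No dialog ]
Go to Layout [ original layout ]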



Jeremy Bante

Dec 3, 2013, 2:05:26 AM
To follow up on looking for other influences, these are some of the other logging solutions I've looked at so far:
What else is out there? Are any of these any good?

The most universal feature is a way to record whatever message the application cares to fit in a string of text for each event. Let's keep that.

The next most common feature is a timestamp for each event (usually UTC and to millisecond precision if it can be helped). Let's keep that, too.

The next most common feature is a way to categorize recorded events by a flag that usually gets called a log "level," which I personally think is an unfortunate word choice. If we want to include log levels, there is more thinking to do. The log level means different things to different logging tools and standards. Sometimes the log level is used to filter which events get recorded in the log based on some setting read at runtime — "debug" log messages might be recorded by a developer while trying to fix a broken script, but not while users are working in a deployed application so that the data logged is throttled to a reasonable quantity. Sometimes all log messages are recorded, and the log level is used to filter the display of events, or to log different events to different places. Sometimes the log level indicates the appropriate priority and urgency of action to take in response to an event. Sometimes the log level is nominal data — just a categorical tag — and sometimes the log levels are ordinal data — the different log levels have a particular order relative to each other. What do folks think about these different approaches? Should we adopt one of the existing level schemes as our own, pick & choose, or come up with something entirely new? If we want to interact with any formats, do we adopt the most thorough approach and pare the information down when exporting to a less detailed standard, or adopt the simplest and rely on some extensibility mechanism?
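As a concrete example of the runtime-filtering flavor, the write step could simply be gated by a threshold (a sketch; $$LOG.THRESHOLD is a hypothetical global set at startup, and the numbering is syslog-style, where lower means more severe):

If [ $logLevel > $$LOG.THRESHOLD ]
    Exit Script [ ]   // the event is below the configured verbosity, so don't record it
End If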

Some other common features include:
  • Means to identify the source of the log message, ranging in detail from an IP address to an application name to a complete stack trace.
  • A status or error code for each event.
One uncommon but tempting feature is the ability to record structured arbitrary fields of data for each event.

Jeremy Bante

Dec 3, 2013, 2:04:17 AM
to fmsta...@googlegroups.com
On Tuesday, December 3, 2013 1:29:29 AM UTC-5, matt wrote:
I don't know whether you guys are interested or have access to the magazine site, but I can give you access if you want it. The video I shot about error handling uses a logging format where I simply use the name of the field itself to extract the value from a local variable with the exact same name as the field. This allows me to simply duplicate a field and rename it as needed. Very low friction.

I did see that and liked it. In a project I'm working on, I made a variation on the same idea that puts the variable-to-field matching logic in the logging script instead of in the schema, so that more of the logging logic stays in one place, but there's no more friction in adding fields/log parameters than in your version. I haven't decided which I like better.

Daniel Smith

Dec 3, 2013, 2:40:25 AM
to filemakerstandards.org
Matt,
I don't have access to filemakermagazine.com, but I would like to check out that video and sample file you made. If you could give me access, I would appreciate it.
The last change I made to the sample file used a similar idea of auto-entering a value in fields based on the field's name. I've still stuck with the ScriptLog/ScriptLogItem relationship for displaying all the name/value pairs, but I wanted to save a few values in the ScriptLog table, for viewing as a list. I'd like to see how you handled logging of miscellaneous names you don't have a field for.

Jeremy,
UTC timestamps sound like a good idea. The only way I know how to get this without a plug-in is via Get ( UTCmSecs ), which is based on the client computer's clock (I assume!). I think it would be more consistent to base a timestamp on the server's clock via Get ( CurrentHostTimestamp ). Unless there is another way to get sub-second UTC time, I'm not sure it would add any value over the server's time, since each client's clock may differ slightly from the others.
Actually, thinking through this as I type, the added value of using Get ( UTCmSecs ) would be that log entries created by a single client could be compared to each other at sub-second precision.

Regarding log level, I suggest we include it so it can be used for every purpose you listed. Not everyone will need or want to use it for all those reasons, but having it there gives them the ability to do so.




Jeremy Bante

Dec 3, 2013, 6:36:23 PM
to fmsta...@googlegroups.com
On Tuesday, December 3, 2013 2:40:25 AM UTC-5, dansmith65 wrote:
The last change I made to the sample file used a similar idea of auto-entering a value in fields based on the field's name. I've still stuck with the ScriptLog/ScriptLogItem relationship for displaying all the name/value pairs, but I wanted to save a few values in the ScriptLog table, for viewing as a list. I'd like to see how you handled logging of miscellaneous names you don't have a field for.

For an application I'm working on now, we opted to dump the Let notation for any remaining key-value pairs that don't get their own dedicated fields into a reserved miscellaneous field, rather than take the EAV approach. It's not my favorite thing in the world, but it's expedient.

For the purposes of coming up with a standard, I wonder if we'd be better off focusing on the interface FileMaker applications use to send events to the log rather than the implementation, following the example of the Script Parameter Interface which is careful not to specify a data serialization format.
 
Jeremy,
UTC timestamps sound like a good idea. The only way I know how to get this without a plug-in is via Get ( UTCmSecs ), which is based on the client computer's clock (I assume!). I think it would be more consistent to base a timestamp on the server's clock via Get ( CurrentHostTimestamp ). Unless there is another way to get sub-second UTC time, I'm not sure it would add any value over the server's time, since each client's clock may differ slightly from the others.
Actually, thinking through this as I type, the added value of using Get ( UTCmSecs ) would be that log entries created by a single client could be compared to each other at sub-second precision.

That will be the Get ( CurrentTimeUTCMilliseconds ) function now. I've heard that the accuracy of the sub-second timing might be questionable, but it could still be better than nothing. For hosted solutions, log entries between clients could be roughly compared since they are created in the table in the order of occurrence anyway. The millisecond precision is more interesting to me for logging execution times of different processes and user reaction times. Even if it is imperfect, it's better than whole-second precision.

UTC is useful for ordering events that happen on devices running in different time zones. For a hosted solution, Get ( CurrentHostTimestamp ) makes this a non-issue; but distributed solutions that sync data intermittently are an increasingly prevalent part of the landscape. You do raise a valid concern about clock drift. It isn't hard to calculate an adjustment to UTC from the difference between Get ( CurrentTimestamp ) and Get ( CurrentHostTimestamp ) (after accounting for possible time zone differences between client and server). Alternatively, the client's opinion of UTC and Get ( CurrentHostTimestamp ) could both be stored. Who knows when clock drift data may have some diagnostic use? Either answer sounds like an implementation detail more than part of the interface to standardize.
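For example, the adjustment might look something like this (a sketch; it assumes time zone offsets are whole multiples of 15 minutes and that the actual drift is well under that):

Let ( [
    ~rawDiff = Get ( CurrentHostTimestamp ) - Get ( CurrentTimestamp ) ;   // seconds; time zone offset plus clock drift
    ~drift = ~rawDiff - Round ( ~rawDiff / 900 ; 0 ) * 900 ;   // strip the whole-time-zone part, leaving just the drift
    ~utcMilliseconds = Get ( CurrentTimeUTCMilliseconds ) + ~drift * 1000
] ;
    ~utcMilliseconds   // the client's UTC milliseconds nudged onto the server's clock
)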

Daniel Smith

Dec 3, 2013, 7:20:15 PM
to filemakerstandards.org
For the purposes of coming up with a standard, I wonder if we'd be better off focusing on the interface FileMaker applications use to send events to the log rather than the implementation

I think that's a good idea. I propose we use the interface implemented in my sample file. I designed the implementation around that exact idea: everyone may use different "Log Writers" (as I called them in the sample file), but the generic "Logger: Create Entry ( logData )" script provides the entry point for the creation of all log entries.
I also propose the use of the LogData custom function as the standard method of collecting environmental data for logging.
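For reference, a call through that interface might look something like this (hypothetical values; the exact LogData signature is open to discussion):

Perform Script [ "Logger: Create Entry ( logData )" ; Parameter:
    LogData ( 3 /* log level */ ) &
    # ( "message" ; "Invoice batch posted" ) &
    # ( "invoiceCount" ; $invoiceCount ) ]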

Either answer sounds like an implementation detail more than part of the interface to standardize.

I completely agree. I think there are valid use cases for both Get ( CurrentTimeUTCMilliseconds ) and Get ( CurrentHostTimestamp ) and the standard shouldn't say that users should always use one or the other. That being said, I really like your idea of recording both values.

This is an implementation idea, but I'm really excited about how the new script step "Perform Script on Server" can be used in logging. In my sample file, I had to change layouts to a logging table in order to create a log entry, but with the use of this new script step, the client can just send the request to the server (not even bothering to wait for a response) and not have to worry about changing/restoring context. For FM13 solutions, this has the potential to drastically reduce the overhead of logging.
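Something like this, perhaps (a sketch; $logData is a placeholder for the assembled pairs, and Wait for completion would be off so the client doesn't block):

If [ Get ( MultiUserState ) = 2 ]   // running as a client of a host, so the server can do the work
    Perform Script on Server [ "Logger: Create Entry ( logData )" ; Parameter: $logData ; Wait for completion: Off ]
Else
    Perform Script [ "Logger: Create Entry ( logData )" ; Parameter: $logData ]   // single-user fallback
End If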



Jeremy Bante

Dec 10, 2013, 6:39:15 PM
to fmsta...@googlegroups.com

The interaction with a script of "Here's some arbitrary dictionary data, now log it and leave me alone" looks about right, and each application using a LogData function for the data it likes to log looks about right. However, it isn't clear from your sample file which data are a mandatory part of what we should expect to see in every standard-abiding implementation, and which data are just there because they might be handy for your particular sample. Logging different data for different log levels is a nice implementation touch, but perhaps not something to standardize on. In the interest of promoting adoption, the smaller pill we give folks to swallow, the better; so how little data can we make a mandatory part of a log and still have a useful log?

I mentioned timestamps already. I think that should be included in the logged data, but a logging script can figure out the timestamp for itself if it isn't already given, and this is what we see in other logging solutions. Maybe including a timestamp is a best practice rather than a standard. Even if it is a standard, I imagine many implementations will tolerate its absence anyway. Should we standardize the name of the variable? timestamp? eventTimestamp? logTimestamp? To be determined by the implementation?

A log level is pretty common. But a logging script might fall back on some reasonably innocuous default in the absence of a specified value.

That leaves some general-purpose message, but when enough other data is provided, it might not be needed either.

Thoughts?

Daniel Smith

Dec 10, 2013, 8:09:17 PM
to filemakerstandards.org
I don't know that any data should be required. In my opinion, the data to be logged should be stated as suggestions (best practices, maybe?) rather than as a standard.

Taking timestamp as an example, I think it's pretty obvious that a timestamp is useful, but, as has been discussed, it can be retrieved in many different ways. I don't think it's necessary to standardize on a name for a timestamp, because I think it's appropriate for it to take on a different name based on how it's collected (CurrentHostTimestamp vs CurrentTimestamp).

I agree about log level: it's pretty common, and we could state it as a best practice, but I don't see any reason to require it. The same goes for a general-purpose message.

I like the idea that a log entry can contain as much or as little data as the developer chooses.

On the topic of implementation, an idea I've had for a while is to use a web viewer to view the name/value pairs. It could decode them and display them in an HTML table. If a value contains encoded name/value pairs, it could either recursively expand all values or use JavaScript to expand the value when the user selects it. This method provides a way of viewing the data without having to break it out into separate fields/records. It doesn't help much for searching, though.
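For instance, a naive web viewer calculation could turn the stored Let notation straight into table markup (a sketch; LOG::data is a hypothetical field holding the "$name = value;" pairs, and this breaks if a value itself contains "$", " = ", or ";"):

"data:text/html,<table border=1><tr><td>" &
Substitute ( LOG::data ;
    [ " = " ; "</td><td>" ] ;
    [ ";¶$" ; "</td></tr><tr><td>" ] ;
    [ "$" ; "" ] ;
    [ ";" ; "" ]
) &
"</td></tr></table>"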



Jeremy Bante

Dec 11, 2013, 2:48:40 PM
to fmsta...@googlegroups.com
If we're talking about best practices rather than standards, I'd say that substantially lowers the threshold of acceptance for each practice. I like it. I don't hear anyone complaining about accepting arbitrary dictionaries of data to suit each application's whims. We've already brought up three values that solutions really ought to give special treatment, but with exceptions. What else is there that goes beyond "I find this data point handy sometimes, but you can take it or leave it" to "you really ought to track this if you want to be taken seriously" that we haven't brought up in this thread yet? Current layout (ID) comes to mind for the former category, and account name comes to mind for the latter.

Daniel Smith

Dec 11, 2013, 3:14:44 PM
to filemakerstandards.org
Data I strongly suggest everyone collect with every log entry:
  • LogLevel
  • TimeStamp
  • AccountName
  • ScriptName
  • ScriptParameter
For the broader list, the LogData and Error custom functions contain suggested sets of data to log, depending on the situation. These lists are on the verbose side, though.



Jeremy Bante

Dec 11, 2013, 4:15:24 PM
to fmsta...@googlegroups.com
I think the script parameter can be overkill for entries that aren't for error tracking or debugging. What about layout name?

Jeremy Bante

Dec 11, 2013, 4:23:30 PM
to fmsta...@googlegroups.com
I haven't heard any objections to the LogData function. That seems like a pretty solid practice. However, I've seen several variations on it. LogData with no parameters. LogData ( logLevel ). LogData ( message ; logLevel ). A few non-FileMaker solutions seem to like having a separate logging function for each logLevel. Any favorites?

Malcolm Fitzgerald

Dec 11, 2013, 6:06:50 PM
to fmsta...@googlegroups.com
On 12 Dec 2013, at 7:14 am, Daniel Smith <dansm...@gmail.com> wrote:

Data I strongly suggest everyone collect with every log entry:
  • LogLevel
  • TimeStamp
  • AccountName
  • ScriptName
  • ScriptParameter

I always include a label and a message, which is a text string. It can be more or less generic, and in some complex scripts the message contains a number that identifies the error block or, sometimes, the loop count.

I wouldn’t bother with ScriptParameter because it’s my practice to ensure that required script parameters are present and meet the requirements before the script begins. 

Layout name has been suggested. Like record ID, found count, record number, etc., I would think it's optional. In my logging, those things simply get pushed into the message string as required.

Another practice that I have is to log a full set of environment variables at login, as errors only occur after that happens. They are collected as name/value pairs in a return separated list and passed to the standard log script as a message. It’s good enough for reference purposes and it ensures that there is a record in case the problem that the user is complaining about isn’t generating an error. 
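The collection itself can just be a handful of Get functions folded into pairs, something like this (a sketch; which functions to include, and the pair format, will vary by solution):

# ( "accountName" ; Get ( AccountName ) ) &
# ( "applicationVersion" ; Get ( ApplicationVersion ) ) &
# ( "systemPlatform" ; Get ( SystemPlatform ) ) &
# ( "systemVersion" ; Get ( SystemVersion ) ) &
# ( "systemIPAddress" ; Get ( SystemIPAddress ) ) &
# ( "device" ; Get ( Device ) )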

Malcolm

Daniel Smith

Dec 11, 2013, 7:28:56 PM
to filemakerstandards.org
I agree, ScriptParameter isn't always required. I think the same applies to LayoutName/Id, though. Here's an example scenario that wouldn't require LayoutName:

A script that receives data via the parameter, processes that data (possibly via ExecuteSQL), then returns the processed data. In this case, the current layout was irrelevant to the operation of the script.

I agree that a message of some sort should be included, but the message may just be a set of specific name/value pairs. Malcolm said he adds data like the current loop iteration count to a message string, but I usually add that as a separate name/value pair. Here's an example where the "message" is a name/value pair:

When measuring performance of a script, I would personally create a simple log entry with these values: LogLevel, TimeStamp, AccountName, ScriptName, RunTime.
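For example (a sketch; $startUTC is a hypothetical variable captured with Get ( CurrentTimeUTCMilliseconds ) at the top of the script being measured):

Perform Script [ "Logger: Create Entry ( logData )" ; Parameter:
    # ( "logLevel" ; 6 /* info */ ) &
    # ( "timeStamp" ; Get ( CurrentHostTimestamp ) ) &
    # ( "accountName" ; Get ( AccountName ) ) &
    # ( "scriptName" ; Get ( ScriptName ) ) &
    # ( "runTime" ; Get ( CurrentTimeUTCMilliseconds ) - $startUTC ) ]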

Matt Petrowsky

Dec 12, 2013, 3:39:00 AM
to fmsta...@googlegroups.com

Jeremy, given your favoritism toward singular functions, I would think a format like the following would be something you would like.

- LogError()
- LogWarn()
- LogNotice()
- LogDebug()

etc...

On Dec 11, 2013 1:23 PM, "Jeremy Bante" <jeremy...@gmail.com> wrote:
I haven't heard any objections to the LogData function. That seems like a pretty solid practice. However, I've seen several variations on it. LogData with no parameters. LogData ( logLevel ). LogData ( message ; logLevel ). A few non-FileMaker solutions seem to like having a separate logging function for each logLevel. Any favorites?

Jeremy Bante

Dec 12, 2013, 3:06:46 PM
to fmsta...@googlegroups.com
I have a weak personal bias against passing commands as parameters, and enumerated values more generally. Passing a log level as a parameter to a function doesn't seem to raise my hackles so much, though. I'm not sure why. One scenario where I have more cause for concern is ordinal log levels: it would make sense to represent them numerically instead of with text flags (such as with syslog), which could lead to some confusing code:

LogData ( "Weird stuff is happening" ; 4 )    // what the hell does "4" mean!?

But this particular issue is just as easily addressed by creating a series of constants as custom functions to label the numeric values:

LogData ( "Weird stuff is happening" ; LogLevelWarning )    // vs LogWarning ( "Weird stuff is happening" )
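In practice, each of those constants would just be a zero-parameter custom function whose entire body is its number. For example, LogLevelWarning might be defined as (syslog-style numbering assumed here, where lower is more severe):

/* LogLevelWarning: syslog severity 4 (warning) */
4

with LogLevelError returning 3, LogLevelNotice 5, LogLevelInfo 6, and LogLevelDebug 7 defined the same way.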

I'm not proposing any one approach — just exploring what might be reasonable. If we stumble on something that's inter-subjectively better, cool. If not, that's OK, too.

Jeremy Bante

Dec 12, 2013, 3:25:21 PM
to fmsta...@googlegroups.com
On Wednesday, December 11, 2013 6:06:50 PM UTC-5, Malcolm Fitzgerald wrote:

Another practice that I have is to log a full set of environment variables at login, as errors only occur after that happens. They are collected as name/value pairs in a return separated list and passed to the standard log script as a message. It’s good enough for reference purposes and it ensures that there is a record in case the problem that the user is complaining about isn’t generating an error.

I like that. It seems to me to imply a need for a session ID to be included in the log so those environmental variables can be related to the log entries that don't contain those data — a Huffman coding of log data, in effect.
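In practice that could be as simple as one step in the file's opening script (the variable name here is just a placeholder):

Set Variable [ $$SESSION.ID ; Value: Get ( UUID ) ]

The login entry would carry the full environment snapshot along with that ID, and every later entry only needs to include the session ID to be joined back to it.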

I agree with Malcolm that in simple situations, it can make sense to keep the log as simple as the situation being logged by just putting the important details in the log message. I also agree with Dan that there's value in keeping the log message/description/headline/thingy distinct from additional arbitrary dictionary data, or the log may become difficult to navigate. Keep it simple, but keep your options open.

Malcolm Fitzgerald

Dec 12, 2013, 6:24:39 PM
to fmsta...@googlegroups.com

On 13 Dec 2013, at 7:25 am, Jeremy Bante <jeremy...@gmail.com> wrote:

I like that. It seems to me to imply a need for a session ID to be included in the log so those environmental variables can be related to the log entries that don't contain those data — a Huffman coding of log data, in effect.

That’s a good idea. In practice, when I’ve needed to use the log, I’m liaising with someone who can answer questions. It isn’t too hard to locate the session login data and relate it to the error conditions, but a session ID would make that trivial.

Malcolm

dansmith65

Jan 11, 2014, 12:08:40 AM
to fmsta...@googlegroups.com
I like the idea of a separate function per log level because it makes the available log levels extremely obvious. Let's say one person uses the log levels "error", "warn", and "debug", and another person uses "alert", "error", and "info"; if they both happened to work on the same database, they would have a heck of a time remembering which log level to use and the order/meaning of each one.

I might even suggest the following to make the order of severity clear:
  • Log1Error()
  • Log2Warn()
  • Log3Info()
  • Log4Debug()
  • Log5Trace()

Matt Petrowsky

Jan 12, 2014, 1:40:58 AM
to fmsta...@googlegroups.com
If you look at this commit on my pull request, you'll see I have simplified the LogData function to use numeric values without specifying what they include. I would leave it variable and just cover the internal basics.

https://github.com/petrowsky/fmpstandards/commit/177f2f3200896e6edffb9f4a680911a8a2c45de1

Jeremy Bante

Jan 12, 2014, 12:38:24 PM
to fmsta...@googlegroups.com
I like it.

One thing I noticed looking at other event logging schemes is that numeric log levels usually represent increasingly critical events as the log level number decreases. The lower the log level, the more likely a fast resolution to the problem warrants detailed diagnostic data (if we chose to embrace a similar convention). In light of that, it might make sense for any extra dictionary data to be included when log levels are less than certain thresholds rather than at exact matches.
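For example, the data-collection side could branch on a threshold rather than on exact level matches (a sketch; the names here are only illustrations):

Case (
    theLogLevel ≤ 3 ;   // error or more severe: attach the heavier diagnostics
    LogData ( theLogLevel ) & # ( "scriptParameter" ; Get ( ScriptParameter ) ) & # ( "stackTrace" ; $stackTrace ) ;
    LogData ( theLogLevel )   // less severe: keep the entry lean
)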

Matt Petrowsky

Jan 12, 2014, 11:28:41 PM
to fmsta...@googlegroups.com
Yeah, I was just lazy in the use of < or >.

I was working on getting it into the Standards file, as I'm trying to bring that up to date. I use it to copy from.

What do you guys use?

