def getButtons(self):
    '''Return a list of tuples (x,y) for common @button nodes.'''
    return g.app.config.atCommonButtonsList  # unusual.

def getCommands(self):
    '''Return the list of tuples (headline,script) for common @command nodes.'''
    return g.app.config.atCommonCommandsList  # unusual.

def getEnabledPlugins(self):
    '''Return the body text of the @enabled-plugins node.'''
    return g.app.config.enabledPluginsString  # unusual.

def getRecentFiles(self):
    '''Return the list of recently opened files.'''
    return g.app.config.getRecentFiles()  # unusual
if 0:
    # Not necessary **provided** that theme .leo files
    # set @string theme-name to the name of the .leo file.
    g.app.theme_color = settings_d.get_string_setting('color-theme')
    g.app.theme_name = settings_d.get_string_setting('theme-name')

Given the complexities of the code, I simply do not understand how anyone can be confident that proposed changes would work exactly as before. After 15 hours of intense study, I don't understand the code well enough to have any confidence in any real changes to the code.
> I don't understand the code well enough to have any confidence in any real changes to the code.
If you, a rather clever, smart, experienced developer, and also the author of this code, still do not fully understand it after 15 hours of intense study, shouldn't that tell you something about the quality of the code? If you can't fully understand the code, what chance do the rest of us have of understanding it?
Now what I really don't understand is your decision not to fix this code even after you yourself have learned that it is far from readable and maintainable. Why don't you make it simple and readable?
I could help, if you need any help.
> I don't understand the code well enough to have any confidence in any real changes to the code.

That is precisely why real changes are necessary. I would say I don't have any confidence that it works correctly at all in its current state. That should be reason enough to refactor this code, no matter how much time and effort it will take. But I guess you'll prefer to leave this code as it is.
I am tempted to put aside all my current tasks and write my own launchLeo.py which would completely monkey-patch the configuration code and just read settings from a database table. I guess it would greatly reduce the number of lines of code that have to be executed every time Leo opens a new commander.
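As a rough illustration of that idea, and nothing more, reading settings from a single database table could look like the sketch below; the table name and columns (settings(name, kind, value)) are made up for the example and are not part of Leo.

import sqlite3

def load_settings(db_path):
    # Read every (name, kind, value) row into a plain dict keyed by name.
    conn = sqlite3.connect(db_path)
    try:
        rows = conn.execute('SELECT name, kind, value FROM settings')
        return {name: (kind, value) for name, kind, value in rows}
    finally:
        conn.close()

def get_setting(settings, name, kind, default=None):
    # Replacement-style getter: return the stored value if the kind matches.
    entry = settings.get(name)
    if entry and entry[0] == kind:
        return entry[1]
    return default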
The aso.start method and its helpers now appear to recreate the required initialization of settings exactly as it is done in Leo's startup code. Subtle bugs will abound if they don't.
Insulting comments about the code's quality show engineering immaturity. Yes, the code is difficult. That in no way proves that it could be simplified safely.
I would be happy to make the code simpler if I knew how to do this without damaging Leo.
To gain my approval, you would have to convince me of the following:

Requirement 1. That users are presently suffering from configuration-related bugs that cannot be fixed by small changes to the present code base.

Requirement 2. That you have a solid plan to prevent any proposed refactoring from introducing changes, of any kind, in how users perceive Leo's settings.
> Insulting comments about the code's quality show engineering immaturity. Yes, the code is difficult. That in no way proves that it could be simplified safely.

I am truly sorry if you read my comments as insulting. That was never my intent. Do you really think I would try to get your approval for anything while at the same time insulting you? That would be rather foolish of me.
HOME=$HOME/tmp/blank python launchLeo.py ~/tmp/new.leo
Traceback (most recent call last):
  File "/home/btheado/src/leo-editor/leo/plugins/qt_frame.py", line 3369, in tab_callback
    c.findCommands.startSearch(event=None)
  File "/home/btheado/src/leo-editor/leo/core/leoFind.py", line 596, in startSearch
    self.openFindTab(event)
  File "/home/btheado/src/leo-editor/leo/core/leoFind.py", line 531, in openFindTab
    g.app.gui.openFindDialog(c)
  File "/home/btheado/src/leo-editor/leo/plugins/qt_gui.py", line 209, in openFindDialog
    d.setStyleSheet(c.active_stylesheet)
AttributeError: 'Commands' object has no attribute 'active_stylesheet'
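For reference, the failing call is d.setStyleSheet(c.active_stylesheet), which expects an active_stylesheet attribute that evidently was never set on this startup path. A purely illustrative guard, not a proposed patch:

# Illustrative only: fall back gracefully if the attribute is missing.
stylesheet = getattr(c, 'active_stylesheet', None)
if stylesheet:
    d.setStyleSheet(stylesheet)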
This is a breakthrough, in two, no three ways:
>
> Vitalije, should you choose to accept it, I would like to name you as lead
> engineer for the prototyping unit test. This will acquaint you with all
> the difficulties, and all the possibilities :-)
>
I gladly accept the challenge. I am not sure what you mean by unit test, but here is how I see it.
I will make a python module with the following functions:
- parse_settings_tree(vnode) - returns a generator over the settings in the subtree of the given vnode; yields (type, value) tuples
- make_conf(filenames) - returns an object with a single method:
  - get(name, kind) - a replacement for c.config.get
- save_conf(filename)
- load_conf(filename)
- show_current_settings_tree(c, p) - rebuilds the settings outline, with all values in effect, as the first child of p, so that the user can edit settings; the root of this outline will have the headline '@current-settings'
- update_current_settings_and_close(c) - finds the `@current-settings` node, processes it, and updates the values in c.config, perhaps also storing them in ~/.leo/config.db
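A skeleton of such a module might look roughly like this; only the names in the list above come from the proposal, and every body is a placeholder:

# mymodule.py -- a sketch; every body here is a placeholder.

def parse_settings_tree(vnode):
    '''Yield (type, value) tuples for each setting in vnode's subtree.'''
    raise NotImplementedError

class Conf:
    '''The object returned by make_conf.'''
    def __init__(self, d):
        self.d = d  # name -> (kind, value)

    def get(self, name, kind):
        '''A replacement for c.config.get.'''
        entry = self.d.get(name)
        return entry[1] if entry and entry[0] == kind else None

def make_conf(filenames):
    '''Read the given settings files; later files override earlier ones.'''
    d = {}
    # ...open each file, walk it with parse_settings_tree, fill d...
    return Conf(d)

def save_conf(filename):
    '''Persist the current settings, e.g. to ~/.leo/config.db.'''

def load_conf(filename):
    '''Reload previously saved settings.'''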
A unit test then can look something like this:
import mymodule

conf = mymodule.make_conf(['leoSettings.leo', 'myLeoSettings.leo',
                           'activeUnitTest.leo'])
# c is the open commander (e.g. for activeUnitTest.leo).
d = c.config.settingsDict
for name, gs in d.items():
    assert conf.get(name, gs.kind) == gs.val
A couple of other tests should prove that save_conf/load_conf work as expected.
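A round-trip check along those lines might look like the following; it assumes load_conf returns a conf object and save_conf persists the most recently built one, details the proposal leaves open:

import mymodule

conf = mymodule.make_conf(['leoSettings.leo', 'myLeoSettings.leo'])
mymodule.save_conf('config.db')          # persist whatever make_conf produced
conf2 = mymodule.load_conf('config.db')  # assumed to return an equivalent object
# As above, c is the open commander.
for name, gs in c.config.settingsDict.items():
    assert conf2.get(name, gs.kind) == conf.get(name, gs.kind)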
If this succeeds, then Leo can use this module in the following way. Instead of editing myLeoSettings.leo, the user can execute 'show-current-settings-tree', which will dynamically show the entire tree of settings, where the user can edit any setting, and then execute 'update-current-settings-and-close'; this tree will disappear, but the changes will be stored for the next run, with the same effect as if myLeoSettings.leo had been edited.
The show-current-settings-tree command should generate an outline similar to leoSettings.leo, with settings grouped by category.
That will allow us to tell any user how to find any given setting. Every user will see the same shape of the settings outline. There could be a variation that shows only the settings the user has changed compared to Leo's defaults, or a variation that shows only the settings matching a given search string, filtering all other settings out.
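If this ends up inside Leo, the two commands could presumably be registered with Leo's @g.command decorator; a sketch, with the wiring to the proposed functions left open:

import leo.core.leoGlobals as g
# show_current_settings_tree and update_current_settings_and_close
# are the proposed functions from the module sketched above.
from mymodule import show_current_settings_tree, update_current_settings_and_close

@g.command('show-current-settings-tree')
def show_current_settings_tree_cmd(event):
    c = event.get('c')
    if c:
        # Build the @current-settings outline as the first child of c.p.
        show_current_settings_tree(c, c.p)

@g.command('update-current-settings-and-close')
def update_current_settings_and_close_cmd(event):
    c = event.get('c')
    if c:
        # Fold the edited @current-settings tree back into c.config.
        update_current_settings_and_close(c)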
Your comments please.
Vitalije
> I am not sure what you mean by unit test, but here is how I see it.

In this context, a unit test is simply a script that verifies, in some unspecified way, that existing code works. It can also be applied to any proposed refactoring/simplification of the code.
To do both, the unit test should be self-contained. That is, it should not use existing code, but instead work from "first principles": the highest possible level of description of what is intended. I think of such tests as a form of double-entry bookkeeping.
The script complements Leo's existing unit tests. In my mind, the script should be substantially more ambitious. It should test, as much as possible, all of Leo's settings, verifying that settings work as expected no matter where they are defined. This kind of test might take minutes.
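Read as code, "first principles" might mean something like the comparison below, a sketch only: for every setting the open commander sees, an independent reader (the proposed module) must report the same value, no matter which settings file defined it.

import mymodule

def check_all_settings(c, settings_files):
    '''Return the settings where Leo and the independent reader disagree.'''
    conf = mymodule.make_conf(settings_files)
    mismatches = []
    for name, gs in c.config.settingsDict.items():
        if conf.get(name, gs.kind) != gs.val:
            mismatches.append((name, gs.kind, gs.val))
    return mismatches

# Running this for every combination of settings files (global, personal,
# per-file) is what could make the full test take minutes.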