I rate the proposal a 7 out of 10.
What I liked about it:
- It provides a good summary of where we are today.
- It stresses the fact that we should avoid Java upgrades.
- It stresses the fact that we should decouple the application from
the database upgrade process.
- The separation of 'contraction' and 'expansion' may be useful.
What I think will make it 10:
Indicate more clearly how we can get from where we are today to the
proposed goal of 'seamless database upgrades'. What I mean is: of the
13 points listed, which do you think we can start doing today? Any
quick wins? Which points would we need to reach before achieving our
end goal?
I would like to understand how Liquibase became the tool of choice. I
understand there has been talk of it for a while now, but I don't know
if anyone on the team has experience with the tool, or has performed a
spike and shared any learnings with the team.
I would like to understand how important it is to have near-zero
downtime (is this mostly driven by cloud customers?).
Undo scripts: are they needed for 'rolling back' to a previous state?
Are there other feasible approaches?
I also feel this proposal is getting quite big. Is there any value in
splitting it up a bit, and if so, how?
Keith.
Note Vivek's previous mail answering my question about this - the undo
scripts only reverse the results of the previous expansion, so they
will be relatively easy: deleting the columns or tables that were
added. Once a contract has been made, the undo won't work.
This is done so we can move to a new version, run for some time (a few
weeks, say) and then roll back to the previous version without much
problem. But you can't roll back through multiple versions.
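To make the expand/undo idea concrete, here is a toy sketch using
sqlite3. This is only an illustration of the pattern - the real Mifos
scripts would be Liquibase changesets against MySQL, and the table and
column names here are made up:

```python
import sqlite3

# Hypothetical pre-upgrade schema with some existing customer data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE client (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO client (name) VALUES ('MFI customer')")

# Expand: purely additive changes -- a new column and a new table.
conn.execute("ALTER TABLE client ADD COLUMN phone TEXT")
conn.execute("CREATE TABLE loan_note (id INTEGER PRIMARY KEY, note TEXT)")

# Expand-undo: delete exactly what the expansion added. Pre-existing
# data is never touched, so rolling back is safe and mechanical.
# (The column is removed via the portable rebuild-and-rename dance.)
conn.execute("DROP TABLE loan_note")
conn.execute("CREATE TABLE client_old (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO client_old SELECT id, name FROM client")
conn.execute("DROP TABLE client")
conn.execute("ALTER TABLE client_old RENAME TO client")

print(conn.execute("SELECT name FROM client").fetchall())
# -> [('MFI customer',)]
```

The key property is that the undo only removes additions, which is why
it can be generated automatically and applied without data loss.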
-adam
--
Adam Feuer <adamf at pobox dot com>
Kojo,
Have you checked out the earlier link Vivek posted?
http://exortech.com/blog/2009/02/01/weekly-release-blog-11-zero-downtime-database-deployment/
The expand scripts will only add data - columns, tables, etc. To undo
this, one can simply delete whatever was added. Furthermore, as Vivek
said, Liquibase automatically generates rollbacks:
http://www.liquibase.org/manual/rollback
These can be used for the "expand-undo" scripts.
However, once a contract operation has been performed, rollback cannot
be done without custom work to regenerate data. The idea here is to run
for some time in the expanded state to verify everything is well -
only then will the contract be performed.
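The reason the contract is the point of no return can be sketched with
another sqlite3 toy example (again, hypothetical schema and names, not
the actual Mifos migrations):

```python
import sqlite3

# v1 schema stores a single full_name column.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE client (id INTEGER PRIMARY KEY, full_name TEXT)")
conn.execute("INSERT INTO client (full_name) VALUES ('Ada Lovelace')")

# Expand: add the new columns and backfill them. The old column stays,
# so the previous app version still runs and rollback stays trivial.
conn.execute("ALTER TABLE client ADD COLUMN first_name TEXT")
conn.execute("ALTER TABLE client ADD COLUMN last_name TEXT")
conn.execute("""UPDATE client SET
    first_name = substr(full_name, 1, instr(full_name, ' ') - 1),
    last_name  = substr(full_name, instr(full_name, ' ') + 1)""")

# Contract (done only after the release has proven itself): rebuild the
# table without the old column. From here on, rolling back would mean
# regenerating full_name from first/last -- custom, error-prone work.
conn.execute("""CREATE TABLE client_v2
    (id INTEGER PRIMARY KEY, first_name TEXT, last_name TEXT)""")
conn.execute("INSERT INTO client_v2 SELECT id, first_name, last_name FROM client")
conn.execute("DROP TABLE client")
conn.execute("ALTER TABLE client_v2 RENAME TO client")
```

While both columns coexist (the expanded state), either app version can
read the data; dropping full_name is what destroys the easy undo path.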
> 2. Which is cheaper? Given that we are increasing our QA efforts we are
> definitely reducing the likelihood of severe bugs in releases. Wouldn't it
> be cheaper fixing any issues that come up in production versus rolling back?
It is very very difficult to fix some issues that are discovered in
production. Furthermore, many issues are discovered by MFIs only after
one or more weekly cycles have happened. This means that to roll back
using the current software, we would have to do one of two costly
things:
1. restore an old backup from a week or more in the past, then have
the MFI re-enter all their data. Painful and slow for the MFI.
or
2. write data migration scripts that will undo the upgrade, while
preserving the week or more of data the MFI has entered. This would be
stressful and painful for us, and slow for the MFI.
We have experienced first hand the pain of both solutions, and do not
want to undergo that pain again.
While it would be great to have no bugs - and we are striving for that
- it is better to have backup systems that make bad bugs non-fatal to
us, and non-fatal to customers. This is especially important the more
customers we have.
Does that make sense?
cheers
adam
--
Adam Feuer <adamf at pobox dot com>
If my understanding is correct, then this approach should not create
data loss issues so long as the scripts are well tested.
It gets a ten (10) from me in that case.
Kojo
I give it a 9 out of 10.
I like
- that it will help Mifos sysadmins be much more confident when
upgrading Mifos.
- that it shows you understand what it will take to actually implement
seamless database upgrades.
- that it stresses the use of off-the-shelf tools (like liquibase)
rather than more custom code.
To make it a 10,
- add: "A Mifos sysadmin will clearly be able to initiate and monitor
the upgrade (contract/expand) process", and "a Mifos developer will be
able to maintain the seamless upgrade system more easily and reliably
than the current upgrade system". This will stress simplicity and
usability from both the developer and user perspectives.
- add "the new (seamless) upgrade mechanism (minus the novel
expand/contract feature or UI) will require less custom code than the
current ("non-sequential database upgrades") mechanism".
Other comments:
- Look carefully at the current Java-based upgrades when estimating the
"seamless database upgrades" stories. Changing to SQL-only will require
a good deal of refactoring. There are a bunch of "upgrades" that do
things like conditionally fixing problems in the data. Also, this may require
refactoring of broken i18n code and custom labels (which needs to happen
anyway). IIRC Mifos permission changes are also done in Java-based
upgrades, but this might give us a chance to use more out-of-the-box
Spring Security. I just wanted to bring these up since they'll take
time. Saying "duplicate code" or "use stored procedures" is fine, but
there's a devil in those details.
- Great work!
Yes, this is my understanding too.
> If my understanding is correct then this approach should not create data
> loss issues so long as the scripts are well tested.
Cool! That is our idea!
-adam