We are working with folks who are migrating from D3 to SQL Server using MVON. So, the target is different, but the project is likely very similar.
In case it is useful, I see these implementations as a SCIENCE. Here is a broad sketch and some tips that might be helpful.
Sys Admin
GOAL: A target environment configured properly for running a MultiValue application, for starters; eventually, have it ready for production.
Have your high-availability, DR, security, and scalability requirements for the live environment ready to go (like using AlwaysOn for SQL Server), or anything else required. [I don't like sys admin, so I'm pouring all of the platform setup into one big Sys Admin category. Use Zumasys for the cloud, or at least use their expertise in this area.]
Compile
GOAL: Everything compiled so the application can be launched.
Getting to 90% is fast, perhaps even amazingly fast, because it is simply another implementation of the MV data model and languages.
You might hit a few new compiler deltas: places where the new compiler doesn't do exactly what the D3 compiler did with the same code, usually because no D3 application previously compiled in the new environment has used that particular construct. Those are almost always fast to resolve, whether with a quick work-around where you do a bulk change to the code or, more typically, with a simple change to the compiler to take your unique use of the language into consideration. jBASE, like MVON, is backed by fast, smart engineers who have seen everything and can likely help you get rid of compile errors in very short order.
The key in this phase is to keep and manage an issues list, and to keep the communication flowing so the changes come back in short order. I said these projects are SCIENCE, but they are all about relationships too, of course.
Initial data migration
GOAL: Export and import (save and restore an account, for example) and have enough data to run the application and test out all of the features.
Where in the previous step any needed changes are usually done by the vendor, this step often finds areas where some "data cleansing" is in order -- places where the data was not right in the first place but D3 was cool with it anyway. The application owner sometimes wants or needs to write a data cleansing routine so that when the data lands in the new environment, the issues that were found are resolved.
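To make that concrete, here is a minimal sketch of such a cleansing pass in MV BASIC. The CUSTOMERS file, the use of attribute 5 as an internal date, and the fix-by-blanking policy are all invented for illustration; a real routine would follow whatever rules the application owner decides on.

* Hypothetical data-cleansing pass: clear any non-numeric "internal date"
* in attribute 5 before the data is exported to the new environment.
OPEN 'CUSTOMERS' TO CUST.FILE ELSE STOP 201, 'CUSTOMERS'
SELECT CUST.FILE
LOOP
   READNEXT ID ELSE EXIT
   READ ITEM FROM CUST.FILE, ID THEN
      ORDER.DATE = ITEM<5>
      IF ORDER.DATE # '' AND NOT(NUM(ORDER.DATE)) THEN
         ITEM<5> = ''            ;* D3 tolerated the bad value; the new target will not
         WRITE ITEM ON CUST.FILE, ID
      END
   END
REPEAT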
Errors, obvious ones
GOAL: Turn the application over to those who know how to use it to do the next round of testing.
The easy part of testing is the first time you launch the application in the new environment. If something has not been configured, compiled, or migrated properly, you hit an error when you get started, fix it, hit another, fix it. Any such issues should be managed like compile and data issues: identify whether the vendor or the application owner needs to do something and manage the issues list.
Suggestion: I like using GitHub issues with HuBoard (a kanban board for GitHub issues) to manage issues across multiple organizations. Your vendor might have other approaches.
Inconsistencies, “deltas”
GOAL: Have the application and data migration in shape to go live.
This is the hardest step to complete, I think. Now you have an application with data (perhaps a duplicate of your live data at a point in time), and it looks like it runs. Ah, but it might not be an exact match when given some variation on the input or when using some feature deep in the application. There could be some hidden "deltas" (things that act differently in the new environment).
It seems to be a common issue with projects that no one wants to test a system that looks the same as the one currently running and is just a different platform. Asking (or even telling) users who know the current system to test the new one is the number 1 spot for delays with the project. Sites can sit on a very-close-but-has-not-been-fully-tested system for a long time without preparing a test plan, without getting anyone who likes to test, without getting the resources of people who know the current system well enough to test, etc.
I also put more focus on testing and managing the deltas than on "documentation" as I like doc that is a by-product of the process. However, there will be a need for good media approaches (email, written doc, meetings, video, postings) throughout the project. Document any inconsistencies between the old and the new that are planned for the new environment. In our case, for example, a site might document how to use Excel or PowerBI directly against their SQL Server data.
Change-over strategy
GOAL: Have a high level strategy and then a detailed plan for how you are going to move from one to the other.
Sometimes sites like to populate both their D3 database and, in our case, SQL Server, using a utility to keep them in sync, and start writing reports against the new environment. In other words, they might do OLTP on the old system until they are ready to move, but start by moving read-only activities to the new environment. I don't know how common that is with jBASE; it may be more of a SQL Server thing, since one of the reasons to move to SQL Server is that users like having direct access to their data, or sites like the security model they can put in place to grant users access within proper parameters. But it might work well in that environment too, since running with parallel data lets you prove the data has all landed in the right places.
Some sites like to do parallel processing in the two systems, sometimes for a month, to make sure all of the i's are dotted in the new system before they stop maintaining the old. This is more common when a site cannot seem to get a priority on testing from users or if the application is so mission critical that they decide it is wise to verify a full month of transactions, reports, and sys admin (backups, etc).
The most common strategy is likely picking a "bite the bullet" date and switching over on that date, with the data migration being one key to making that work. If you do start "streaming the data" to the new environment in advance, then you can do an instant change-over; otherwise there will be downtime while you port the data.
Execute the plan and go LIVE
GOAL: Do it. Go live with your application on the new platform.
There are always those little things that you decided to defer until after the change-over. Do those now and react to anything else that crops up.
Even if you are sick of the project by this time, don’t forget the celebration. After all, projects like this are all about relationships, I mean SCIENCE.
Best wishes with your new (ad)venture. --Dawn
--
Below, not necessarily in the proper order, are some of my experiences.
I did a D3 to jBASE 3 conversion in 2002-2003 on a Windows server and it was a pain in the butt.
Unlike what we were told, there were major differences between D3 and jBASE 3.
In 2007 we explored a transition to jBASE 4, but based on the required changes it was decided to forgo any jBASE update and move to a different platform.
If you get D3, it's like getting an empty house to which you add furniture (programs) to your liking.
When you get jBASE all you get is a pile of bricks.
To understand the above metaphor you have to take into account that jBASE does not have any login procedure, no security whatsoever except for Windows login, nothing.
Besides the program conversion, we had to build the account and user framework from scratch, and that is no easy task. The Windows login does not tell the application who the user is, so we had to implement our own custom login with custom access rights for each user.
While it sounds simple, in reality it took about half a year just to build a framework for accessing jBASE.
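To give a feel for what that framework involves, here is a deliberately minimal sketch of such a custom login, reconstructed from the description above rather than taken from any actual code. The USERS file, its layout (password in attribute 1, a rights list in attribute 2) and the named common are all invented, and real password handling would need to be far more careful.

* Minimal sketch of an application login for an environment that only
* provides the OS login. Everything here is illustrative.
COMMON /APP.SECURITY/ USER.ID, USER.RIGHTS
OPEN 'USERS' TO USER.FILE ELSE STOP 201, 'USERS'
LOOP
   CRT 'User ID : ':
   INPUT USER.ID
   READ USER.REC FROM USER.FILE, USER.ID THEN
      ECHO OFF
      CRT 'Password: ':
      INPUT PASSWORD
      ECHO ON
      IF PASSWORD = USER.REC<1> THEN EXIT   ;* attribute 1: password (plain text only for illustration)
   END
   CRT 'Invalid user or password'
REPEAT
USER.RIGHTS = USER.REC<2>                   ;* attribute 2: list of access rights
* ...from here, every menu and program checks USER.RIGHTS before running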
The short synopsis:
We had about 3000 BASIC programs.
It took about 4,500 person-hours to finish the conversion; that is, one programmer full time plus three part time for one year.
Note that because of the volume we had to prioritize the conversion. Step 1 was therefore to put, at the beginning of each of the 3000 Pick programs, a call to a subroutine which recorded each call to each program in a central file. After a week we had a list of about 500 programs to look into ASAP. If I remember correctly, the average time to convert one program was about 50 minutes, including testing. We had 5-minute jobs as well as day-long conversions, notably for programs that had dynamic calls such as
"CALL @name(…"
In my experience PICK systems are the slowest in the market.
The very unscientific test results below refer to run times for BASIC programs, with 100 being the fastest.
Universe 100
jBASE 3 98
OpenQM 57
D3 32
Flash D3 90 (inferred from other people test results)
jBASE claims that because the programs are compiled into executable form they are somehow better, or at least better performing. Baloney.
There is absolutely no advantage. The programs are not portable between machines of different architectures or different operating systems, and you are dependent on a third-party C compiler: jBASE compiles the BASIC source code into C, then invokes the resident C compiler to build the executable.
Once it is made into an executable, the program still has to run together with the "license" program. Every so often the program communicates with the license manager and will abort if the license is not present, so "build here and run there" is just an illusion.
Before buying, the jBASE PR department was all milk and honey, with unrealistic promises such as "restore the PICK save tape and voila, everything works".
After we paid the dough, all we got was "jBASE is not PICK" and "this is not a bug, it's a feature". None of the bugs we reported were ever fixed.
As a bonus they had a program called PORTBAS that was supposed to smooth out any conversion issues, but in practice it proved to be of very limited use.
Before jumping in, I suggest you dip your toes into the jBASE world by downloading their free developer version. You should try to develop a simulation of your current system and see what problems you encounter.
I know that life-altering decisions are often made by people who have no clue beyond the sales pitch, but in this case at least you have the opportunity to make a case based on your own experience.
I know nothing about the current version, jBASE 5, but here are some points to take into account about jBASE 3 and 4.
1) jBASE has a lot of reserved words, therefore we had to change many variable names in about 3000 programs. Some program names also had to be changed because they were incompatible with jBASE.
2) Some instructions are not supported, for example TCLREAD. Some have a slightly different syntax; for example,
IF A
THEN
has to be changed to
IF A THEN
Some have different functionality because jBASE is case sensitive and does not have any CASING ON/OFF instruction.
3) Different SYSTEM() values
Different OCONV/ICONV conversion codes
Different INCLUDE syntax
Different Q pointers
4) Because jBASE is case sensitive, we had problems when users entered commands in the wrong case, therefore we had to build our own command interpreter, similar to JSH, and also made sure that all inputs were converted to upper case when necessary (a rough sketch of that kind of upper-casing loop follows this list).
5) There is no "phantom" or job scheduler; we had to write our own.
6) Programs are searched for in the system path, therefore MD pointers to programs are no longer functional. In our case we moved all common programs into a "common" directory that appears in the path after the current directory.
7) As of jBASE 4, all programs are loaded as DLLs, therefore development is a chore: each time you compile, in order to run the latest version you have to log off and then log on again.
8) As of jBASE 4, named common no longer works as per the manual, so we were unable to use it to transfer data between programs, and we aborted any attempt to "upgrade" to jBASE 4.
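As mentioned under point 4, here is a rough sketch of that kind of upper-casing command loop. It is reconstructed from the description, not the original interpreter; a real JSH replacement would also need verb validation, history, error trapping and so on.

* Wrapper shell that upper-cases the verb before executing it, so
* 'list customers' behaves like 'LIST customers' on a case-sensitive system.
LOOP
   CRT ':':
   INPUT CMD
UNTIL OCONV(CMD, 'MCU') = 'QUIT' DO
   IF CMD # '' THEN
      SPACE.POS = INDEX(CMD, ' ', 1)
      IF SPACE.POS THEN
         VERB = OCONV(CMD[1, SPACE.POS - 1], 'MCU')  ;* force the verb to upper case
         REST = CMD[SPACE.POS, LEN(CMD)]             ;* leave item-ids etc. untouched
      END ELSE
         VERB = OCONV(CMD, 'MCU')
         REST = ''
      END
      STMT = VERB : REST
      EXECUTE STMT
   END
REPEAT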
It's too long ago so I don't recall every issue I had but I can tell you there were many.
Of course the decision makers were frustrated because the rosy sales pitch and the reality were so far apart, to the point of accusing us programmers of sabotage, but in the end we found other organizations that were able to confirm the myriad of problems we had.
--
<snip> Perhaps you don't understand that even a simple actuarial loop calculation will involve time slice paging to and from disc.
All I see you do is talk about D3 versus your expectations and about how many satisfied clients you have. Good for you.
If you have numbers that compare various MV systems in a fair way, and not just pub talk, please feel free to publish them.
For me, a fair way means testing with some real-life programs that do something useful, not just adding A to B a million times.
Of course things may have changed in the last 12 years, and I don't pretend that I am a know-it-all like you, and you are right that I am kind of stupid and I didn't know "that even a simple actuarial loop calculation will involve time slice paging to and from disc."
Well, I think that Tom Jeske got enough food for thought, so as far as I am concerned I consider this thread closed unless there are some hard numbers to look at.
Have a nice day.
Hi all,
It is unfortunate that, even after all these years, there is no accepted standard benchmark for MultiValue that reflects a real application mix of computation and file I/O. It is fairly simple to construct a test that shows product A as being faster than product B, or the other way around, if you do this based on knowledge of the internal architecture of the products. The majority of comparison tests that I have seen are based on totally unrealistic repeated operations, such as building a 10 MB string by appending one character at a time, or copying a large file in a manner that might be significantly affected by differences in hash order.
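As a concrete example, the sort of loop below is what often gets quoted as a "benchmark"; it mostly measures how a particular flavour reallocates a growing string, which says very little about a realistic mix of computation and file I/O.

* A typical unrealistic micro-benchmark: build a ~10 MB string one
* character at a time and time it.
BIG = ''
START = TIME()                    ;* whole seconds since midnight
FOR I = 1 TO 10000000
   BIG = BIG : 'X'
NEXT I
CRT 'Built ' : LEN(BIG) : ' bytes in ' : TIME() - START : ' seconds'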
We recently undertook some detailed performance analysis with the aim of making QM faster than one of the other major products for a comparison based on a set of tests that were largely dictated by the client. Aside from pointing us at some areas where we could make improvements (most of which have now been done), these tests revealed that performance measurement is not as easy as it sounds. One of the slightly surprising discoveries was that the performance of the same test (no I/O involved) could differ by as much as 30% from one run to the next, a difference that in many cases exceeded the gains we were trying to make. Further analysis showed that these variations are in some way attributable to the way the operating system manages processes and are totally out of our control. We are still trying to understand the finer detail of this discovery.
Another “feature” that we had overlooked in previous tests was that today’s processors change their clock speed automatically to balance performance and power usage. Simply changing the computer’s power optimisation settings could change the test outcome considerably, though this is likely to affect all products in the same way when doing comparisons.
For most applications, I suspect that actual performance is influenced more by program design than by the underlying database architecture. If performance is an issue, simply fixing poor design may resolve it to the extent that relative performance of different products is not an issue and choice of product can be based on price, functionality or quality of support services rather than performance comparison.
Martin Phillips
Ladybridge Systems Ltd
17b Coldstream Lane, Hardingstone, Northampton NN4 6DB, England
+44 (0)1604-709200
David,
I HAD multiple MV systems; I am no longer in the MV business world. Maybe one of the more qualified players will devise some benchmark that reflects real-life requirements, including of course file management. Anecdotal testimony regarding installation, conversion, ease of use, connectivity or support is also very useful. In the end, people have to become aware that total cost versus total benefits has to balance their budget, the same way it does when they buy a car or a house, and to be aware of all costs, not just the license fee, which may be insignificant in proportion to the total cost.
Hi,
I have been working with several MV flavours over the years (mainly ADDS Mentor, early versions of D3/Win and UD) and for the last 3 years with D3/Linux. The biggest gripe I have with D3 is indexing, which is just rubbish!
It sort of works with BASIC (the KEY function), but sorting on indexed attributes is just as slow as on non-indexed ones. And why do you still have to index on A-correlatives and not on dict items like in U2?
So if you can get that to work that would be a big step forward.
Another thing that doesn't work properly is UTF-8 support. For English-speaking countries that obviously doesn't matter, but for us in Germany, for instance, it is a BIG problem, since every multi-byte UTF-8 character counts as 2. And since I work for an international company with sites all over the world, support of local languages is paramount if we want to introduce our software at every site.
BTW, the company I'm working for also plans to migrate from D3 to jBASE, starting in China, and we are already running on the latest version of D3.
--
Addendum to the above message. I demonstrated with a Pick input, however it was my intention to use XAML and F# as the major input routines and simply pass the screen capture back to the Pick database. However, it appears that someone has decided to make a complete mess of that by fooling around with ANSI, which is not compatible with Pick: y acute is character 253 and y umlaut is character 255. It may well be that it won't matter, because those byte values never appear as individual bytes in UTF-8, but why do it in the first place?
--
While recognizing the risk of contributing to a thread which I have only skimmed, I must point out that this use of the term "ANSI" is imprecise at best. There are many ANSI standards. But when it comes to character sets, I think you (Peter) are referring to ISO character sets. There are many of these too, most notably ISO-8859-x. For instance, ISO-8859-1 "Latin-1", which is mostly the same as Windows codepage 1252, "Western". (Windows codepages usually have printable characters assigned to hex-80 through hex-9F, which ISO 8859 leaves unassigned.)
But I concur on your statements about UTF-8, including that it does not collide with values used as Pick delimiters.
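For anyone wondering why there is no collision: valid UTF-8 never produces a byte in the range 252 to 255 (lead bytes stop at 244 and continuation bytes at 191), which is exactly where the sub-value, value, attribute and segment marks live. On a byte-oriented system you can verify it with a loop along these lines (the sample string is arbitrary):

* Check every byte of a UTF-8 string against the Pick delimiters
* (SVM = 252, VM = 253, AM = 254, SM = 255). None should ever match.
S = 'ÄÖÜäöüß plus some Greek: δύναμαι'
COLLIDES = 0
FOR I = 1 TO LEN(S)               ;* LEN counts bytes when strings are byte oriented
   IF SEQ(S[I, 1]) >= 252 THEN COLLIDES = 1
NEXT I
IF COLLIDES THEN CRT 'Delimiter collision!' ELSE CRT 'No collision with Pick delimiters'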
On Sun, 26 Feb 2017, Peter McMurray wrote:
Hi Kevin, I am a great believer in "if it works, don't fix it".
All Pick data is built around delimiters and they work fine.
I can see massive issues in introducing backslash escapes into the terabytes of data that are stored using Pick delimiters.
Pick can store JSON now, as every Pick item is a name/value pair, and in a well-designed Pick database every attribute is a name/value pair, with the name being the A attribute and S being a synonym.
I have dived further into the D3 manual and can see where the manual writer has become massively confused. They state:
"NOTE—FlashBASIC does not currently support UNICODE, therefore the item is converted to ANSI and back to UNICODE after the trigger completes."
ANSI is an 8-bit code that has nothing to do with Unicode; they should have just said that all data is passed as 8-bit character strings.
--
Hi Mecki, I am afraid that you are wrong. LIST works perfectly with UTF-8 characters from any plane so long as you do not limit the length of the display, thus proving the storage is fine.
REC = ''
REC<1> = 'I can eat grass';* English
REC<2> = 'ὕαλον ϕαγεῖν δύναμαι· τοῦτο οὔ με βλάπτει';* Classic Greek
CRT REC<1>
CRT REC<2>
*
CRT LEN(REC<1>)
CRT LEN(REC<2>);* Fail
*
CRT REC<1>[1,5]
CRT REC<2>[1,5];* Fail
CRT FIELD(REC<2>,'λ',1);* Fail
CRT SEQ('λ');* Fail
:RUN BP TEST
I can eat grass
ὕαλον ϕαγεῖν δύναμαι· τοῦτο οὔ με βλάπτει
15
82
I can
ὕα
ὕ
206
If somebody is silly here then it's neither Kevin nor me.
All we tried to point out is that the D3 UTF-8 implementation is
seriously flawed.
BTW I am German and work in Germany - and yes, I work with D3 and
we use those unique German characters here in Germany every day.
And it is not unusual to have 5 or more 2-byte characters in a 30-character string.
When I refer to 'German' characters I mean characters that are
unique to the German alphabet (ÄÖÜäöüß) (even though those are
also used in other languages like Finnish and Turkish).
And those are all 2 bytes in UTF-8.
All other letters in the German alphabet are also in the English
alphabet and of course single byte ASCII.
Before we upgraded to UTF-8 we were using the German character set in wIntegrate, where the letter ä, for instance, is stored as }.
So when we were told that D3 now supports UTF-8, we decided to upgrade and convert all those substitute ASCII characters in our database to UTF-8 instead of migrating to another database. Which wasn't without problems, because one of those substitute characters is /.
Only then to find out that we suddenly had all sorts of problems with our software not working properly any more.
Thank you for pointing out that backspace doesn't work with UTF-8 characters in D3 either - I hadn't even noticed until now. So when a user reports missing characters I now know what could have caused it.
One more reason to ditch D3.
And how silly is this statement? "the List command works provided you do not limit the number of bytes in an attribute."
When I use LIST or SORT I usually want to display data in columns. And to do that I use dictionary items; and dictionary items require a justification in attribute 9 and a length in attribute 10.
Our software uses LIST in procs to display data on the screen or to print reports - and that doesn't work properly, regardless of whether you think it is silly or not.
Cheers - end of thread for me
" The Future and UTF-8The industry is converging on UTF-8 and Unicode for all internationalization. Microsoft NT is built on a base of Unicode