MV and Azure DevOps Pipelines - suggestions requested

Jeff Teter

unread,
Oct 26, 2022, 11:07:22 AM10/26/22
to Pick and MultiValue Databases
Our company has decided to use Azure DevOps and Git for project and source management. Deployments will take place using ADO pipelines. Our local ADO admin says that artifacts created when pushing a change from the 'develop' branch to the 'release' branch will contain all source code items each time. In our case, that includes somewhere over 3,500 programs and 25,000 dictionaries.

Our developers create branch names based on the PBI number they are working on plus a brief alpha tag for identification. For instance, '13531_CUST' would be for PBI 13531 and something to do with customer data. As work is done, the developers also create lists of changed items, named with the branch name and the source file. So, '13531_CUST.BLBP' would be for the '13531_CUST' branch and the 'BLBP' host library. These lists are part of the deployment package and, if the project is updated, the corresponding list is updated as well.

The ADO admin insists that the only way to use the pipeline is to re-compile all programs and re-install all dictionaries. This seems to be extremely inefficient. Instead, I have requested that the PBI number be written to a file called 'DEPLOY' (or similar) with a date/timestamp appended on the target server. The PBI is a part of the Pull Request data and should be available to the pipeline.

From the UniVerse side, we can then pick up that file and, based on the PBI number, select the particular lists that have been included. Using those, we can target the programs to be compiled and/or the dictionaries or other assets that need to be deployed.
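A rough sketch of the pipeline-side step I'm picturing, assuming the DEPLOY file is directory-backed (type 19) so the agent can write to it directly, and that the PBI number is surfaced as a pipeline variable - the paths and variable names here are illustrative, not anything ADO provides out of the box:

    #!/usr/bin/env python3
    # Record the PBI number so the UniVerse side can pick it up later.
    # DEPLOY_DIR and PBI_NUMBER are hypothetical names we would define
    # in the pipeline; nothing here is ADO-specific.
    import os
    from datetime import datetime, timezone

    deploy_dir = os.environ.get("DEPLOY_DIR", "/u2/prod/DEPLOY")
    pbi = os.environ["PBI_NUMBER"]  # e.g. "13531", taken from the PR

    # Record ID is the PBI; the body carries the deployment timestamp.
    stamp = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M:%S")
    with open(os.path.join(deploy_dir, pbi), "w") as rec:
        rec.write(stamp + "\n")

The UniVerse job then only has to SELECT the DEPLOY file, read each PBI, and pull in the matching '<PBI>.<file>' lists.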

I'm hoping that someone in the community has already dealt with this issue and might have some recommendations for moving forward. 

krnntp

unread,
Oct 26, 2022, 2:01:24 PM10/26/22
to mvd...@googlegroups.com
Years ago, I had a fairly fun and creative job supporting a university alumni department... I wrote and fine-tuned a lot of custom fundraising reports and data exports, using every imaginable feature in UniData, from the query level to UniBasic and out to Unix. The database product I worked with was called Benefactor, first offered in 1986, and it had a companion product called Colleague, which was used for payroll and student affairs.

It turns out Colleague is still alive and well, but Benefactor must have disappeared sometime after 2006? 

It seems that some of the most active UniData users out there today are academic sites running Ellucian's Colleague product.

Is there anyone on this list using Colleague? Does anyone remember Benefactor, or know why / when it disappeared? A 20+ year lifespan for a software product is nothing to be ashamed of; but still. I just found out about this, and can't help but feel a little sad.

Best regards, 
K. Warwick Russell

Jonathan Wells

unread,
Oct 26, 2022, 9:14:46 PM10/26/22
to 'Jay LaBonte' via Pick and MultiValue Databases
I've retired from it all, but I did work with Colleague and Benefactor at two different schools. I was the DBA for Beloit College and the "Administrative Computing Coordinator" (which meant I was a cross between the DBA and an assistant director) at Campbell University. I was very involved with the conversion from Benefactor to the new module in Colleague (whose name I don't remember right now) at both schools.

Benefactor needed a lot of updating to keep up with the demands of development and alumni relations. At some point, Datatel decided to completely replace Benefactor. The new design was created to work better with SQL. I was not very impressed with it. We tried to go with Ellucian's new SAP-based reporting solution. We had a very experienced consultant from Ellucian trying to get it to work; however, it just never quite worked.

Cheers,
Jonathan Wells

krnntp

unread,
Oct 27, 2022, 9:11:11 AM10/27/22
to mvd...@googlegroups.com
Jonathan, interesting. Looking on the web, I saw a couple of mentions of a Colleague Advancement module; I guess that must be the new solution they went with? My school, Oberlin, was a Benefactor-only site: the Development and Alumni Affairs building had chosen Benefactor to replace its own, even older database, while the rest of the campus used Banner. We had been using it so long that we had already come up with satisfactory ways of storing even the most arcane data, so when Datatel introduced a new way of recording bequest intentions, well, we already had an EZ Screen and our own data file for that.

I can't imagine SQL doing justice to all the gray areas and interrelated data :-)

Best,
K. Warwick Russell

Jim Idle

unread,
Oct 31, 2022, 6:20:06 AM10/31/22
to mvd...@googlegroups.com
Firstly, I would advise that you use the git-flow methodology and not roll your own - that's when people start to hate on Git, because they didn't really grok it before creating branches etc.

There is no way that a merge from develop into master/main should cause an update of every source file. I would say that is incorrect in the extreme. However, if you have dictionary items that need to be updated, then you should store them as individual records and have a script that installs them when they change.

It sounds like you guys are coming from different angles and are not on the same page yet…

You are not trying to store the actual hash files in Git, are you? That may well be your misconception here. You need your source code and dictionary items to be individual files on the native file system.
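As a minimal sketch of what I mean - assuming one native file per program under a BP directory and one per dictionary item under DICT, which is illustrative rather than prescriptive - the install list falls straight out of git diff:

    #!/usr/bin/env python3
    # List only the items that changed between two commits; with one
    # native file per item, this *is* the install list.
    import subprocess

    def changed_items(base, head):
        out = subprocess.run(
            ["git", "diff", "--name-only", f"{base}..{head}"],
            check=True, capture_output=True, text=True,
        )
        return [p for p in out.stdout.splitlines() if p]

    for path in changed_items("main", "develop"):
        if path.startswith("BP/"):       # program source -> recompile
            print("COMPILE", path)
        elif path.startswith("DICT/"):   # dictionary item -> reinstall
            print("INSTALL", path)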

Jim

Martyn

unread,
Oct 31, 2022, 6:46:09 AM10/31/22
to Pick and MultiValue Databases
Hi Jeff,

I am not sure how relevant this is, but we have Git integrated into OpenInsight 10 now, and I have been using it with a couple of my colleagues for a project that I have been working on. It is very simple to use and enables us all to stay in sync around the world. However, we have to share our dictionary changes manually because those do not (I believe) transfer; maybe you would have to do the same. Thankfully, I keep all of my data files in a separate volume, so they are easy to zip and pass around. I guess that we could automate this through some sort of program, but that's overkill for our needs. We then use OpenInsight's built-in RDK for deploying upgrades to deployed users' systems, so the Git side is only used for development changes to the base and any inherited applications.

I hope that you find a similarly easy and robust solution for your needs.

M.

Wol

unread,
Oct 31, 2022, 7:31:47 AM10/31/22
to mvd...@googlegroups.com
On 31/10/2022 10:20, Jim Idle wrote:
> There is no way that a merge from develop into master/main should cause
> an update of every source file. I would say that is incorrect in the
> extreme. However, if you have dictionary items that need to be updated,
> then you should store them as individual records and have a script that
> installs them when they change.

As per my other post ...

I'm trying to write a simple data dictionary manager, which is intended to store the dictionary in text files, so you could put those in Git and then have a makefile that rebuilds and recompiles any dictionaries that have changed.

You could also write a makefile to intelligently recompile any source code; intelligently cataloging programs will be a little more tricky :-)
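Something like this make-style loop is what I have in mind, sketched in Python rather than make; the compile_item wrapper is hypothetical and stands in for whatever BASIC/CATALOG step your platform needs:

    #!/usr/bin/env python3
    # Recompile only the sources that are newer than their last build,
    # using a marker file per item the way make uses targets.
    import os
    import subprocess

    SRC_DIR, STAMP_DIR = "BP", ".built"
    os.makedirs(STAMP_DIR, exist_ok=True)

    for name in os.listdir(SRC_DIR):
        src = os.path.join(SRC_DIR, name)
        stamp = os.path.join(STAMP_DIR, name)
        if (not os.path.exists(stamp)
                or os.path.getmtime(src) > os.path.getmtime(stamp)):
            # compile_item is a hypothetical wrapper around your compiler.
            subprocess.run(["./compile_item", name], check=True)
            open(stamp, "w").close()  # touch the marker for next time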

Cheers,
Wol

Rex Gozar

unread,
Oct 31, 2022, 9:04:05 AM10/31/22
to mvd...@googlegroups.com
We're running Universe here and have been using source code control along with a "build" system and a "patch" system since 2004.

The "build" system gets all items from source code control, including file sizing data to create files, Q-pointer items for the VOC file, binary PNG images, dictionary items, and programs, and builds the Universe account completely from scratch. This creates our installable product.

The "patch" system determines all source code items that have changed between releases and creates a patch file (similar to your DEPLOY idea). The patch file only contains the source items that have changed. The patch file is deployed to our FTP server (as part of the DevOps process) where it can be SRCDOWNLOAD'ed and SRCINSTALL'ed (part of our SRCTOOLS toolkit) on demand on any of our client's servers or developer accounts. Our SRCINSTALL can create files, load dictionaries, compile programs, deploy binary images and executables, delete old software, as well as detect version mismatches.

It's important to note that "building" a product and "patching" an existing system are two different things: they use the same source code, but they need two different approaches.

Developers have their own account where they develop and test their changes. They use our SRCDIFF to compare their account to the base product and generate a patch that can be applied back to the main trunk (you never want to manually keep track of what's changed). For programs, we also use the free version of DiffMerge (from SourceGear) to compare code.

rex

Tony Gravagno

unread,
Oct 31, 2022, 5:35:06 PM10/31/22
to Pick and MultiValue Databases
Hey Rex - I'm working with a team using Git for non-MV, but all of the MV side is manually managed. I just started to spec out a new system for the MV side. There's no doubt that it will look extremely close to what you have. If you're interested in FOSSing the solution there, please ping me. I'd also be up for a private arrangement for fixes and enhancements.

If anyone else has something like this that can be shared, please let me know. Otherwise... it's off to re-invent the wheel again we go...

Regards,
T

Dick Thiot

unread,
Oct 31, 2022, 9:47:48 PM10/31/22
to mvd...@googlegroups.com
This is the kind of thing that the MultiValue community needs in open source! Please consider opening it up.

Dick

Jim Idle

unread,
Nov 1, 2022, 2:40:45 AM11/1/22
to mvd...@googlegroups.com
As somewhat of an aside, jBASE now allows you to store an encrypted version of the source code with your binaries. It means that (a) you know which version of the source code was used to create a running system (we found that on almost all ports, people had multiple versions of programs and no longer knew which was the correct one), and (b) you can ship your binaries with encrypted source to the customer, and if you need to debug source code on site, the debugger will let you decrypt it. It's very useful.

I also think that open-source DevOps tooling could be very useful to let people in the MV world adopt the now de facto standards for source control and CI/CD. It should be written as a generic piece plus driver scripts for a particular target system. Though even compiling and cataloging all your programs should not take long these days.

Jim

Joe G

unread,
Nov 3, 2022, 2:49:04 PM11/3/22
to Pick and MultiValue Databases
UniVerse lets you store dictionaries as files in a normal folder using type 19 files. You have to make sure your item names aren't difficult or illegal in whichever OS you're using, but other than that they should work the same as having them in a hashed file. There may be a small performance penalty, but with the speed of today's hardware I doubt it would make enough of a difference to matter. Has anyone tried using type 19 files in production? I did some testing at one point and it worked, but I haven't actually converted any of our production files to use them.
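As a quick illustration of that name-legality check - these rules cover the common Windows/Unix restrictions, not every edge case:

    #!/usr/bin/env python3
    # Flag record IDs that would be awkward or illegal as OS file names.
    import re

    WINDOWS_ILLEGAL = re.compile(r'[<>:"/\\|?*]')
    RESERVED = ({"CON", "PRN", "AUX", "NUL"}
                | {f"COM{i}" for i in range(1, 10)}
                | {f"LPT{i}" for i in range(1, 10)})

    def unsafe_ids(ids):
        """Yield record IDs that would be problematic as file names."""
        for rid in ids:
            if (WINDOWS_ILLEGAL.search(rid)
                    or rid.upper() in RESERVED
                    or rid.endswith((" ", "."))):
                yield rid

    print(list(unsafe_ids(["CUST.NAME", "A/B", "NUL", "BAD."])))
    # -> ['A/B', 'NUL', 'BAD.']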