GreenSense: Need help with a bash and git issue

John CC

Jul 28, 2019, 8:40:27 PM
to SENSORICA, sensorica-ecg
I'm stuck on an issue related to bash and, primarily, a bunch of git calls involved in merging/graduating the dev branch into another.
I'm also completely burnt out and struggling to keep up the work of perfecting GreenSense.

I really need other techies to help me solve the remaining issues with the GreenSense system. It's 99% stable but I keep finding little things I need to solve.

If you have experience with bash and git, or at least with git, please put your hand up, maybe you can help.
I'll explain the issue, show you the script, and you might be able to point out a better solution.

Cheers,
John

Tiberius Brastaviceanu

Jul 28, 2019, 9:19:28 PM
to Compulsive Coder, Bob Haugen, Ilian Hristov, SENSORICA, sensorica-ecg
I don't have that experience. Ilian might know, and Bob.


John CC

Jul 28, 2019, 9:36:52 PM
to Tiberius Brastaviceanu, Bob Haugen, Ilian Hristov, SENSORICA, sensorica-ecg
The GreenSense system is so close to solid but I'm struggling with the last remaining issues. And I'm perpetually burned out.

Hoping some other people can step in and help me figure this out. I need fresh eyes and fresh ideas to look for solutions I haven't yet found.

The current issue isn't really with the GreenSense system; it's to do with the scripts involved in continuous integration, and graduating one branch to another after tests pass.
So people don't need to understand the entire system, just a small part of the scripting, to help out with this.

John CC

Jul 28, 2019, 9:50:03 PM
to Tiberius Brastaviceanu, Bob Haugen, Ilian Hristov, SENSORICA, sensorica-ecg
If I can implement the infrastructure upgrades I want to implement, that might help too. Tests should run more quickly, so every time I try a code/script tweak I'll get the result (pass or fail) sooner and can iterate by trial and error towards a solution.
As it is, a code/script tweak takes a few seconds, and then it takes a couple of minutes for the test run to tell me whether it worked. Because I keep tweaking and re-running tests over and over (currently getting failures, not in the GreenSense system components but in the CI graduate script), those minutes add up, and it's getting incredibly frustrating.

Kenneth O'Regan

Jul 28, 2019, 10:11:00 PM
to John CC, Tiberius Brastaviceanu, Bob Haugen, Ilian Hristov, SENSORICA, sensorica-ecg
Hey,

Just spotting this pretty late where I am, but I'll have a chat with you about it and see if fresh eyes could help! Speak to you tomorrow and we might do a screenshare and chat through some of it.

Speak soon,
Kenneth


John CC

Jul 29, 2019, 2:00:59 AM
to Kenneth O'Regan, Tiberius Brastaviceanu, Bob Haugen, Ilian Hristov, SENSORICA, sensorica-ecg
Looks like I fixed this issue. Tests are passing. The live systems seem to be working.

It took some messy code/script hacks to make it all work, which I don't like, because good code ideally shouldn't involve messy hacks. But I'll do what it takes to make it work.

Would still be good to have others review my code/scripts and help improve them.
The GreenSense system has so much going on that a system of this scale would generally take a team of engineers to build and maintain. But I've been doing it mostly solo for five or more years, since the first prototypes. That's probably half the reason I'm perpetually burnt out. Yet I keep pushing myself to perfect everything, or at least get it close enough to perfect to roll it out. Hopefully I can keep going until it's good enough. It's so close.

Would also be good to be able to upgrade all the infrastructure so tests run faster meaning bug fixes don't take so long to implement. Any investments into GreenSense infrastructure are welcome.

Right now I think it's time to log off for the day and take a break. My brain is melting.

John CC

Aug 18, 2019, 7:48:05 PM
to Kenneth O'Regan, Tiberius Brastaviceanu, Bob Haugen, Ilian Hristov, SENSORICA, sensorica-ecg
Where should I look for the code, scripts, and messy hack? I'm not promising any help. I won't understand it myself. But might know somebody who could help.

Thanks for the support Bob. Sorry for the slow reply. I've been swamped lately.

I was getting merge conflicts when calling the graduate.sh script, even though they should have been auto-resolved by the -X theirs argument passed to git merge.
The conflict was occurring on the buildnumber.txt file, because it contains a value which is incremented by the increment-version.sh script.
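
For context, the merge part of graduate.sh is essentially this (a simplified sketch, not the exact script):

    # Simplified sketch of the graduation merge (not the exact graduate.sh).
    git checkout master
    git pull origin master
    # -X theirs resolves any conflicts (e.g. buildnumber.txt) in favour of the incoming dev branch
    git merge -X theirs dev
    git push origin master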

I think one of the issues is that Jenkins wasn't pulling the latest code when it ran the tests. I had added the -X theirs argument, but Jenkins was using an old version of the graduate script which didn't include it. That confused the hell out of me, because I was getting merge conflicts when I shouldn't have been.

If you look at the Jenkinsfile, it lists all the scripts that get called during the full build/test cycle that runs every time the code changes:

One of the messy hacks is that you'll notice "sh increment-version.sh" is called twice, yet the version is only incremented once. This is because, before the "graduate.sh" script is called, I reset the buildnumber.txt file to avoid the merge conflict.
Not only that, it's called during the dev, master, and lts branch builds, when it should only be executing during the dev build.

In all of the other projects related to GreenSense, the version is incremented once at the end of the "dev" branch build, and that incremented version is then carried over to the master and lts branches during graduation.
For the ArduinoPlugAndPlay project, however, I had to change the approach: the buildnumber.txt file is reset before graduating, and the increment-version.sh script then re-increments the version during the master build and the lts build.
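
In rough terms the ArduinoPlugAndPlay flow now looks like this (a sketch only; the reset step shown here is illustrative rather than the exact script):

    # Sketch of the workaround, simplified from the real scripts.
    sh increment-version.sh            # bump buildnumber.txt during the dev build
    # ... dev build and tests run here ...
    git checkout -- buildnumber.txt    # reset the counter so graduate.sh won't hit a conflict (illustrative)
    sh graduate.sh                     # merge dev into master with -X theirs
    sh increment-version.sh            # re-increment during the master (and later the lts) build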

Calling the increment version script, resetting it before graduating, then calling it again is a messy hack I might have been able to avoid if I had fixed the Jenkinsfile in the first place; the -X theirs argument on the git merge should have resolved it. But now that I've implemented it and it works, I'm a bit hesitant to touch it again unless it stops working.
I might try removing that messy hack once I've completed the infrastructure upgrades and everything else on my todo list, but for now it can stay there unless someone else wants to play around with it.

A couple of quirks with the version incrementing:
1) Once the "push-version.sh" script is called, pushing the updated version back to GitHub, the commit message includes "ci skip". Most of the stages include the "when { expression { !shouldSkipBuild() } }" condition, which checks for the "ci skip" commit message (a bash sketch of the check follows below). This is to prevent the version increment/commit/push from triggering another build/test cycle. The side effect is that if "push-version.sh" were called in an earlier stage, all following stages would end up skipped. So that push needs to be at the end of the process (the last stage which has that when condition on it).
2) Once the buildnumber.txt file is incremented, the full version needs to be injected into the scripts-installation/init.sh script, so that when the system is installed it installs the latest version. You can see all this happening in the second-last stage.
3) Not only does the "push-version.sh" script add a "ci skip" commit message to prevent another build from occurring, the "push-updated-version-in-script.sh" script does the same. So it also needs to run right at the end of the Jenkinsfile.
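
The skip check itself boils down to something like this in bash (a sketch of the idea, not the exact shouldSkipBuild() implementation from the Jenkinsfile):

    # Sketch: skip a stage if the last commit was the automated version bump.
    last_message=$(git log -1 --pretty=%B)
    if echo "$last_message" | grep -qi "ci skip"; then
      echo "Last commit contains 'ci skip'; skipping this stage."
      exit 0
    fi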

So another messy hack is that you can see a "sh test-category.sh OLS" call in the "push version" stage. OLS refers to one line setup, or online setup; it's auto-testing the single line install scripts. I don't like running it here, it's messy, but it has to happen after the init.sh script has the new version injected into it and pushed back to GitHub. Therefore it has to happen in that same stage unless I change how things work somehow.

Having the version increment, then reset, then increment again is messy and not intuitive.
Having tests run in the push-version stage is messy too; it looks out of place, but I can't run them before the init.sh script in the GitHub repository has the latest version injected into it.

If anyone looks at that Jenkinsfile it's a bit confusing, when really the Jenkinsfile should be more or less self-documenting: it should be easy to follow and understand what everything does just by reading the names of the scripts.

For now it works though. It would be good to get other people who have experience with bash, git, and Jenkins to review the code and see if they can find cleaner ways to achieve what I've done.

Cheers,
John

Tiberius Brastaviceanu

Aug 21, 2019, 12:31:39 AM
to Compulsive Coder, Kenneth O'Regan, Bob Haugen, Ilian Hristov, SENSORICA, sensorica-ecg, Povilas Jurgaitis, Lai Wei-Hwa, Scott Frederick Laughlin, Tim Lloyd
Hi all, 

John walked me through the test system he built for the GreenSense project (we'll change the name of this project soon). 

Pov, Scott and Lai, I copy you here because you might find this useful for your work. 

Wow!

Monumental work when you think that John developed all that alone. 

I had heard him talk about this test system, but I could never appreciate it before he gave me the tour.

Essentially, from what I understand, this is an automated software (+ hardware) testing system and Github management system. Since GreenSense uses Arduinos, John built an Arduino-based test bed, see attached pic (it can work with other microcontroller-based hardware systems). Every sketch (software) that we run on these GreenSense Arduinos lives in a Github repo. If someone updates a sketch and loads it up to the dev branch on Github, that triggers a server he also set up, connected to this Arduino test bed, to run an automated test on the new code, which is executed directly on a real Arduino in the test bed. The test monitors the hardware's behaviour and looks for irregularities. In case of malfunction the test triggers an error and points to the part of the code that likely failed. If the new code passes the test it is automatically pulled into the master branch (not so fast, but you get the idea...). 

Here is a list of tests written by John for every module of the GreenSense system.

I am not a programmer... I heard that automated tests and integration with Github are the norm for large tech companies. Perhaps hardware companies also build hardware test beds/rigs to automatically check the firmware that runs on these devices. But I find it impressive to find all that complexity built into an open source project like GreenSense. We didn't think of that when we were working on the Sensor Network project. 

Not just that, but the GreenSense system is also plug and play, and updates itself automatically when the repo gets updated. 

I will map the entire architecture with John in the following weeks. I want to surface his work, make it more accessible not only for Sensoricans but also for other open source projects out there. I sense that there is a lot of value in there that's screaming to get out into the world. 

If you are more knowledgeable than I am in programming and can better appreciate John's work, please get inspired and share it around in other circles. I would also like to hear your opinion in this forum though :) 

Thanks again John for your time today.
[Attached image: download_20190821_000243.jpg]

Lai Wei-Hwa

Aug 22, 2019, 10:28:04 AM
to Tiberius Brastaviceanu, Compulsive Coder, Kenneth O'Regan, Bob Haugen, Ilian Hristov, SENSORICA, sensorica-ecg, Povilas Jurgaitis, Scott Frederick Laughlin, Tim Lloyd
Good stuff, John.

Essentially, from what I understand, this is an automated software (+ hardware) testing system and Github management system.

I think you're referring to Jenkins? Pov has recently brought CI/CD to our project to speed up workflow. Gitlab is the other big CI/CD these days and we have it running here at Robco. Just a prejudice of mine, but I prefer RAILS/GO apps over JAVA apps. I also prefer self-hosted repos, though they have their drawbacks (a public repo platform like Github makes it much easier to leverage its community).

Thanks!
Lai



John CC

Aug 22, 2019, 11:38:07 PM
to Lai Wei-Hwa, Tiberius Brastaviceanu, Kenneth O'Regan, Bob Haugen, Ilian Hristov, SENSORICA, sensorica-ecg, Povilas Jurgaitis, Scott Frederick Laughlin, Tim Lloyd
Tibi summed up the automated device testing pretty well. There's a fair bit of detail and other extra functionality like automatic incrementing of versions, etc. each time a build/test cycle runs.
There are also automated testing processes for the GreenSense Index (the git repository containing all device projects/repositories as submodules, as well as all the scripts/apps that make the entire system work together as a whole). These do things like test a whole bunch of the scripts, including running a complete installation process on multiple live garden systems and checking that all devices are detected and installed by plug and play. 
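
For anyone not used to submodules, keeping the index repo and the device repos in sync mostly comes down to standard git commands like these (the URL below is just a placeholder, not the real index repo address):

    git clone --recursive https://github.com/EXAMPLE/GreenSenseIndex.git   # clone the index plus all device repos
    git submodule update --init --recursive    # or initialise the submodules after a plain clone
    git submodule update --remote              # pull each submodule up to its latest commit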
Hopefully we can document all this soon, including diagrams, so people can not only understand how it works but also learn from it and reuse what I've built.

I think you're referring to Jenkins?

Yes, I use a Jenkins CI server running in a docker container on an Odroid XU3 to detect changes and launch tests. It will soon be moved to an Odroid HC1 with an SSD so it runs faster.

Gitlab is the other big CI/CD these days and we have it running here at Robco. Just a prejudice of mine, but I prefer RAILS/GO apps over JAVA apps

I've heard of GitLab but I don't think I've ever used it. I might need to investigate it and see what advantages it has. 
Can you think of any advantages which I should look into?
Does the free tier provide sufficient functionality? Or do you need to go to the paid tiers to really gain the benefits of it?
I'm attempting to keep running costs near zero for now. But in the future as the project starts generating revenue we can look at investing in premium options.

I also use Travis CI to run software-only builds/tests (just as a way of double checking that builds and software tests pass), but to run hardware tests I need the hardware test rigs connected to the same computer which runs the build server. This is because the tests upload microcontroller sketches via USB before running them.
So any CI solution I use will need to be able to run on my own server, and unfortunately can't be a hosted solution.
Jenkins isn't perfect but does handle this quite well.
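
For reference, the Jenkins container basically just needs the test rig's USB serial device passed through, something like this (a sketch only; the image name, port, and device path are assumptions, and on the Odroid it would need to be an ARM-compatible image):

    docker run -d --name jenkins \
      -p 8080:8080 \
      -v jenkins_home:/var/jenkins_home \
      --device /dev/ttyUSB0:/dev/ttyUSB0 \
      jenkins/jenkins:lts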

GitLab seems to have a self hosted option so it may actually work with the hardware test rigs. I'll look for a GitLab docker container and try deploying it when I get a chance.

I tried installing "go" (ie. golang) inside a docker container so I could use GitHub's "Hub" program for publishing releases and couldn't get it to install and run properly even though it worked on my workstation.
Eventually I found an example of just using "curl" to post the release zips to GitHub without requiring any additional tool/program installations so I ran with that.
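
For anyone curious, the curl approach boils down to two calls against the GitHub releases API, roughly like this (a simplified sketch; TOKEN, OWNER, REPO, TAG, and FILE are placeholders, and the real script parses the response more carefully):

    # 1. Create the release and grab its id from the JSON response.
    release_id=$(curl -s -H "Authorization: token $TOKEN" \
      -d "{\"tag_name\": \"$TAG\"}" \
      "https://api.github.com/repos/$OWNER/$REPO/releases" | grep -m1 '"id":' | tr -cd '0-9')
    # 2. Upload the release zip as an asset.
    curl -s -H "Authorization: token $TOKEN" \
      -H "Content-Type: application/zip" \
      --data-binary @"$FILE" \
      "https://uploads.github.com/repos/$OWNER/$REPO/releases/$release_id/assets?name=$(basename "$FILE")"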

It might be possible, though, to use an existing "go" docker container as the base for running go-based apps, so I can just reuse that installation instead of having to install go myself. So if there's a "go" app worth testing out I can try that.

I also prefer self-hosted repos, though they have their drawbacks (a public repo platform like Github makes it much easier to leverage its community).

I do like knowing that even if I lose my entire workstation or my house burns down all my code is backed up to GitHub.
There would be advantages to self hosting the repos. I might look into doing both.
Soon I'll have an Odroid HC1 web server with a 1tb Evo Plus 860 SSD installed so there will be plenty of space, and plenty of speed, for hosting repositories as well as a bunch of other web apps/services.

One advantage of self hosting the repos would be that tests would run faster. Currently pulling code from GitHub each time code changes does take some time due to GitHub's network latency. Pulling code from my own network would be almost instant from an Odroid HC1 with SSD.
One downside is that while my broadband is fast (100mbps fibre optic) it's residential so it only has about 99% uptime, not the 99.99% uptime of commercial ISPs. Occasionally I do have downtime but not often. So I can't guarantee the same level of uptime as GitHub.
Doing both self hosting and pushing to GitHub could be a good way to get the advantages of both options: 100% uptime for end users, but the speed of self-hosted repos when running tests on my own network.


I'm always keen to look for ways to improve upon what I've done, so if you have ideas about how to do that I'm definitely open to exploring them and trying them out. Especially once the new faster test servers and web server are deployed (I've ordered most of it just waiting for it to arrive) because they'll be more powerful and have much more storage space than the existing systems.

For now I think I should investigate GitLab, see if I can get it up and running on one of my own servers, then see if I can get it to upload sketches and run tests, and weigh up the pros and cons.
Thanks for the suggestion.

Cheers,
John

Lai Wei-Hwa

Aug 23, 2019, 11:21:41 AM
to Compulsive Coder, Tiberius Brastaviceanu, Kenneth O'Regan, Bob Haugen, Ilian Hristov, SENSORICA, sensorica-ecg, Povilas Jurgaitis, Scott Frederick Laughlin, Tim Lloyd
I've heard of GitLab but I don't think I've ever used it. I might need to investigate it and see what advantages it has. 
Can you think of any advantages which I should look into?
Does the free tier provide sufficient functionality? Or do you need to go to the paid tiers to really gain the benefits of it?
I'm attempting to keep running costs near zero for now. But in the future as the project starts generating revenue we can look at investing in premium options.

We're using self-hosted, which I really prefer. I believe the cloud version just limits the number of users for the free account.

I tried installing "go" (ie. golang) inside a docker container so I could use GitHub's "Hub" program for publishing releases and couldn't get it to install and run properly even though it worked on my workstation.
Eventually I found an example of just using "curl" to post the release zips to GitHub without requiring any additional tool/program installations so I ran with that.

I was referring to the language of the app itself (Jenkins vs Gitlab). I'm not a fan of JAVA (which Jenkins is written in).

Soon I'll have an Odroid HC1 web server with a 1tb Evo Plus 860 SSD installed so there will be plenty of space, and plenty of speed, for hosting repositories as well as a bunch of other web apps/services.

With only 2GB RAM you're going to be struggling pretty quickly. And being limited to ARM distros is annoying. That's why I would always recommend a cheap tower or workstation. You can get one for under $300 and have much more RAM (that you can upgrade), more SATA connections (so you can use an SSD for the OS and programs but keep your logs and storage on HDDs), and the freedom to choose any distro.

Doing both self hosting and pushing to GitHub could be a good way to get the advantages of both options.

I'd do the same.





Thanks!
Lai



John CC

Sep 1, 2019, 1:02:06 AM
to Lai Wei-Hwa, Tiberius Brastaviceanu, Kenneth O'Regan, Bob Haugen, Ilian Hristov, SENSORICA, sensorica-ecg, Povilas Jurgaitis, Scott Frederick Laughlin, Tim Lloyd
I was referring to the language of the app itself (Jenkins vs Gitlab). I'm not a fan of JAVA (which Jenkins is written in).

Yeah I've never got into java development but as long as it works then I'm happy with it. And Jenkins has been performing really well for what I want.
I do plan to try out GitLab though so if I see advantages with that I may switch over. 
 
With only 2GB RAM you're going to be struggling pretty quickly. And being limited to ARM distros is annoying. That's why I would always recommend a cheap tower or workstation. You can get one for under $300 and have much more RAM (that you can upgrade), more SATA connections (so you can use an SSD for the OS and programs but keep your logs and storage on HDDs), and the freedom to choose any distro.

That is a valid point; 2GB RAM SBCs do have limits. I haven't yet reached that limit with any of the existing test servers (currently Odroid XU3 boards), but as I add more apps/services to the new Odroid HC1 boards (which also have 2GB RAM) I may eventually reach it.
If I do reach a limit I can't deal with, I will look into other options, such as potentially using a normal PC/server. In that case I'll repurpose the HC1 boards for something else. There's plenty I can do with them.

I have a beefy desktop here which I'm considering setting up as a test server: 8-core 4.7GHz processor, 16GB RAM, a 250GB SSD (which I might upgrade at some point to a faster 1TB SSD), and a 2TB normal SATA HDD. Its power consumption is massive compared to these tiny SBCs. Because it's powerful and runs quite warm, it has so many fans that it roars when running at full capacity, and it's a giant vacuum cleaner sucking in dust which needs to be cleaned out regularly. So I need to sort out the dust before I turn it back on. I recently made a big dust collector which could help with that. Even sticking it in a cupboard with fans and filters on the outside could help with dust and noise.
Once I run some speed tests to compare, I may decide it's worth it if tests run significantly faster, or if I find 2GB RAM isn't enough.

One advantage of these little SBCs is I can easily run them off my solar batteries without needing an inverter and without draining the batteries too quickly (soon to be set up once I get these infrastructure upgrades up and running). So I can pretty much guarantee uptime even during a blackout.
The solar setup will not only mean most of the infrastructure is running from solar power, cutting my power bill, but it can also act like a UPS (once I implement some new solar/power related devices I've created for the JuiceIoT project group, to handle switching between mains and battery depending on battery level or whether there's a blackout).
I can even run my modem off the batteries, and considering I'm on fibre broadband, it should mean a blackout doesn't have any impact on the infrastructure.
Another advantage of the SBCs is they're silent. I like it when my lab doesn't sound like a buzzing/roaring data center, but the noise may be worth it if I see significant speed improvements in tests.
Yet another advantage of these SBCs is that I can fit maybe 30 or so of them into the same space as my desktop. Considering space in my lab is limited, that's quite useful. I'm stacking them into drawers in my network stack, so I can have an entire server farm of SBCs taking up roughly 2 feet by 1 foot of floor space, which is nothing. My desktop has a larger footprint than this entire network stack (admittedly the network stack is much taller). By adding another stack of drawers on top of the existing stack I could probably fit 100 SBCs in that same tiny amount of floor space.

Being limited to ARM distros hasn't been an issue yet; I'm actually getting fairly used to them. Lately I'm more used to them than normal OSes, because I've used them for a few years now, including on my Odroid XU3 workstation. There's not much I can't do on an ARM board, so I'm yet to find it a problem.
And considering the target for all this software is ARM boards like RPis, using ARM SBCs for most of the infrastructure can be a bonus. For test servers it means I can build ARM-based docker images on my build/test servers.

My Odroid XU3 boards have been using the official Ubuntu 16.04 OS provided by HardKernel (version 18.04 has too many instabilities installing certain software). 
I'm currently trying out DietPi for the Odroid HC1 boards due to its low resource consumption. If DietPi can't do what I want I'll try Armbian Stretch (Debian-based, which is faster and more stable than Ubuntu-based OSes).
I'm currently running Armbian Stretch on my new Odroid N2 workstation and it's very fast; so far it has been difficult to max out the 4GB of RAM, considering Armbian and the apps I use are fairly lightweight.

For my docker containers I tend to take inspiration from existing arm based images then create my own images which are arm compatible.
While DietPi looks appealing due to its low resource consumption (esp the RAM) I'm still setting it up and I won't know for a little while whether I'll run into any software issues.

You definitely have some good points there and I may at some point take your advice. But for now I just got 3 Odroid HC1 boards with SSDs so I'll see how far I can push them. They should be enough for what I want but yes in the future it might be worth using a desktop/server type PC. I just have to weigh up the pros and cons.

For $300 I could almost buy 2 Odroid HC1s (depending on what size SSD I get). So it may be that distributing over multiple HC1s provides better value for money than a single desktop/server type PC, as well as providing redundancy: with multiple HC1 boards mirroring each other, if one goes down the others can take its place. Until I do some performance comparisons, or run into limits with the 2GB RAM, I won't know whether it's worth it. But I'll get there at some point and find out.

Thanks for the suggestions. I'll post to the group if I run into any issues or limitations using these ARM SBCs.

Hopefully I'll also be able to post about the benefits of GitLab over Jenkins at some point once I test it out.

Cheers,
John

John CC

Sep 1, 2019, 1:04:53 AM
to Lai Wei-Hwa, Tiberius Brastaviceanu, Kenneth O'Regan, Bob Haugen, Ilian Hristov, SENSORICA, sensorica-ecg, Povilas Jurgaitis, Scott Frederick Laughlin, Tim Lloyd
Doing both self hosting and pushing to GitHub could be a good way to get the advantages of both options.

I'd do the same.

I'd be curious to know how you achieve this. I'm sure I'll find a way but if there's an existing elegant way to do it then it might save me some time figuring it out.

I do have a "commit-and-push.sh" script in every project which I use to do git commits (just simplifies that step) so I could probably tweak it to push to both GitHub and to a local git repo archive.
Then I'll need to figure out how to have the scripts pull from GitHub when installed by someone else, but pull from my local repo archive when running within my network. I'm guessing that's doable; I just need to figure out the best solution.

Lai Wei-Hwa

Sep 3, 2019, 10:54:27 AM
to Compulsive Coder, Tiberius Brastaviceanu, Kenneth O'Regan, Bob Haugen, Ilian Hristov, SENSORICA, sensorica-ecg, Povilas Jurgaitis, Scott Frederick Laughlin, Tim Lloyd
To sync to two repos, simply add a second remote and push there as well.
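
For example (the URL here is just a placeholder):

    git remote add internal git@your-server:path/to/repo.git
    git push internal --all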

I see your logic on going for the small boards. If space and heat are real concerns, your strategy is good. I'll be interested to see if you're able to run everything that ends up in your stack on those things. Keep us informed!

Thanks!
Lai



John CC

Sep 3, 2019, 6:00:08 PM
to Lai Wei-Hwa, Tiberius Brastaviceanu, Kenneth O'Regan, Bob Haugen, Ilian Hristov, SENSORICA, sensorica-ecg, Povilas Jurgaitis, Scott Frederick Laughlin, Tim Lloyd
To sync to two repos, simply add a second remote and push there as well.

Yeah, this seems like the go-to approach.
Here's the commit-and-push.sh helper script I use to speed up the process of committing and pushing to origin (i.e. GitHub).

I could modify that so it pushes to "origin" and to another remote called "internal" or something like that.
Then a single call to that script pushes to both keeping them in sync.
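
Something along these lines, maybe (a rough sketch only, since the real script has a bit more in it, and the "internal" remote doesn't exist yet):

    #!/bin/bash
    # Rough sketch of a dual-push commit-and-push.sh (the "internal" remote is hypothetical for now).
    set -e
    branch=$(git rev-parse --abbrev-ref HEAD)
    git add -A
    git commit -m "${1:-Update}"
    git push origin "$branch"      # GitHub
    git push internal "$branch"    # self-hosted mirror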

I just need to find a good git repo web service to launch on my web server to do internal git repo hosting.

Then I need to figure out how to have the install scripts choose between pulling from GitHub (for end users) or from the internal repos (for when running tests, etc.), but I think that should be doable.
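
Maybe something as simple as an environment variable switch in the install scripts would do it (completely hypothetical names here):

    # Hypothetical: pick the repo source depending on where the install script is running.
    if [ -n "$GREENSENSE_INTERNAL_GIT" ]; then
      REPO_BASE="$GREENSENSE_INTERNAL_GIT"          # e.g. git@my-server:greensense (placeholder)
    else
      REPO_BASE="https://github.com/EXAMPLE-ORG"    # placeholder for the public GitHub location
    fi
    git clone "$REPO_BASE/SomeDeviceProject.git"    # SomeDeviceProject is a placeholder name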

I see your logic on going for the small boards. If space and heat are real concerns, your strategy is good. I'll be interested to see if you're able to run everything that ends up in your stack on those things. Keep us informed!

I'll try it out, see how it goes, and I'll hopefully be able to keep the group posted on the results.

Cheers,
John 
