Download All Programs At Once


Josephine Heathershaw

Jul 14, 2024, 9:22:14 AM
to burathewsie

If you read anything about how the Windows installer system works, it's obvious they applied some ideas from transactional databases to program installation and maintenance, not to mention the .msi files themselves are a database.




There is always the question in designing any database - do you want speed or accuracy/safety? Given that installers can modify system configuration and that a mishap could render the system inoperable, safety has been given a priority over speed. One of the reasons why .msi installers are so slow is because rollback files are made for each file, etc. that will be modified, and then deleted afterwards - allowing any changes to be "rolled back" if something goes wrong in the middle of things (such as a power outage or system crash).
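The rollback-file idea can be reduced to a few lines of shell. This is only a toy sketch of the principle (save a copy before modifying, delete it on success, restore it on failure) — MSI's real rollback mechanism is far more elaborate, and the file names here are made up for illustration:

```shell
#!/bin/sh
# Toy sketch of rollback files: before modifying a file, write a
# rollback copy; on success discard it, on failure restore it.
apply_change() {
    target=$1
    newcontent=$2
    cp "$target" "$target.rollback"            # write the rollback file first
    if printf '%s\n' "$newcontent" > "$target"; then
        rm -f "$target.rollback"               # commit: discard the rollback file
    else
        mv "$target.rollback" "$target"        # failure: roll the change back
    fi
}

printf 'old\n' > /tmp/demo.cfg
apply_change /tmp/demo.cfg "new"
cat /tmp/demo.cfg                              # now contains "new"
```

The cost the post describes comes from doing this for every file and registry key an installer touches, which is exactly why safety-first installs are slow.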

Now, I believe the MSI engine itself enforces installing, modifying, or removing only one program at a time - if you try to run an .msi while another is uninstalling, for example, it either won't run or will wait for the currently running uninstall to finish. Non-MSI installers may not behave this way - since they don't use the MSI engine. But because of this safety design decision, this is probably why appwiz.cpl insists on only letting one uninstaller be called at once.
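The "only one at a time" rule is just mutual exclusion. As a generic illustration (MSI actually uses a global mutex internally, not a lock directory), the same idea can be sketched in portable shell, where `mkdir` is atomic so only one process can acquire the lock:

```shell
#!/bin/sh
# Generic one-installer-at-a-time sketch using a lock directory.
# mkdir is atomic: whichever process creates it first holds the lock.
LOCK=/tmp/one_install_at_a_time.lock.$$

try_install() {
    if mkdir "$LOCK" 2>/dev/null; then
        echo "install started"
        rmdir "$LOCK"                  # release the lock when done
    else
        echo "another install is already running"
        return 1
    fi
}

first=$(try_install)                   # lock is free: succeeds
mkdir "$LOCK"                          # simulate a concurrent install holding the lock
second=$(try_install) || true          # refused while the lock is held
rmdir "$LOCK"
echo "$first / $second"
```

A real installer would typically wait and retry rather than refuse, which matches the "will wait for the currently running uninstall to finish" behaviour described above.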

Most uninstallers track what they are changing so they can roll back successfully if there's a failure. If one isn't aware of all the changes being made (by other uninstallers) then it may actually make things WORSE if it tries to roll back a failed install.

Uninstallation tasks frequently modify files that are shared by multiple programs, or system files and the Registry (part of the reason administrative rights are needed). If multiple uninstall tasks ran at the same time, they could conflict. If you have ever had a run-in with "DLL Hell", this is the same class of problem. Other programs, or Windows itself, could be left in an inconsistent state.

Uninstalling programs simultaneously, besides having the potential problems others mentioned, has very little benefit: it won't be much faster than uninstalling the programs sequentially. Uninstalling a program is a task dominated by disk I/O, and running several I/O-bound tasks at once isn't faster than running them sequentially (unless the programs are installed on two separate physical disks). In fact, it's likely to be slower, because the competing I/O tasks make the disk cache less efficient and force the disk's physical heads to jump from place to place.

I am building a Java Google App Engine server project and a Java desktop client. I would like to run them both at once, but I'm not sure if this is possible using Eclipse/GAE plugin. Is there some way? I'd like to be able to step through them both at the same time.

Right, I'm looking for a script that I can click on after I have logged in to open various programs, just to save me a bit of time. I have managed to get a script to open one, but as a bit of a newbie, can someone provide advice?
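Since the asker is on Windows, the usual answer is a batch file with one `start` line per program. The same pattern in POSIX shell looks like the sketch below — the `sleep` commands are harmless stand-ins, and the program paths in the comments are placeholders to replace with your own:

```shell
#!/bin/sh
# Launch several programs at once: the trailing "&" puts each command
# in the background, so the script doesn't wait for one program to
# close before starting the next.
# (On Windows, the batch-file equivalent is one "start" line per program.)
pids=""
sleep 1 &                 # stand-in for: /path/to/browser &
pids="$pids $!"
sleep 1 &                 # stand-in for: /path/to/mail-client &
pids="$pids $!"
echo "started background jobs:$pids"
wait                      # only here so the sketch exits cleanly
```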

Every time I restart my SAS Enterprise Guide (SAS EG), all my programs are shut down so I have to reopen them which is quite annoying when you work on many programs.

In the picture below: I have to reopen "Program", "Program 1" and "Program 2" every time I restart SAS EG.

When EG shuts down it closes all programs that are left open. If you are using EG 8, then all of these programs will be listed on the Start Page and you can just double click on them to reopen. It would be problematic to expect EG to open all programs you had open in your last session - what happens if you have deleted or moved them since then?

I too wish for the programs to be reopened. I also find it frustrating that the order of the programs that do open changes. Having just migrated from SAS EG 7.15 to SAS EG 8.3, I'm finding the transition surprisingly difficult: the programs that were listed in the old process flow no longer appear. Is there a way for SAS EG 8.3 to mimic the old behaviour?

An alternative approach is just to leave EG open from one Windows session to the next as long as you use "Windows Sleep" rather than signing out. That's what I do and then just do a server connect to reestablish a SAS session. That way your programs stay open from one session to the next.

It sounds like you want to fully automate the process but keep the code separated, possibly for individual logs/troubleshooting/future changes? Regardless, I think what I would do is create a batch file and add it to the Windows task scheduler.

If you are running your programs on a SAS server you may want to consider using the batch and scheduling SAS jobs capabilities of SAS Management Console. It takes care of constructing correct SAS batch command lines without you having to construct them manually.

I have another use case for this question. I have a set of SAS programs that do not share any dependencies, which I wish to run in parallel. At the end of the set, I wish to trap the return code from each job and fail the parent process if any of the child processes report an error.

I came up with the attached script, but it does not work the way I need. Per the man page for wait, once a child process has already terminated, the wait statement refers to a terminated process and Unix (Solaris in my case) returns a zero status instead of the child's real exit code. If I order the scripts by the amount of execution time required, it works, but I cannot predict precisely the completion order of all the steps in the job.

In this example, the script runs 5 SAS programs. Each one just sleeps for a set number of seconds and then aborts with a return code equal to the number of seconds the program sleeps. The number of seconds is in the name of the program.
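One common way around the zero-status problem is to record each child's PID and then `wait` on each PID individually: a POSIX-conforming shell remembers the exit status of its known children, so `wait <pid>` returns the right code even if that child finished long ago. A sketch of the same setup, with a `sleep`/`exit` pair standing in for each SAS invocation and the exit code mimicking the poster's rc-equals-seconds convention:

```shell
#!/bin/sh
# Run the jobs in parallel, record each PID, then wait on each PID
# individually and collect its exit status. The parent fails if any
# child failed, regardless of completion order.
fail=0
pids=""
for secs in 1 2 3; do
    ( sleep "$secs"; exit "$secs" ) &    # stand-in for: sas prog_${secs}.sas
    pids="$pids $!"
done
for pid in $pids; do
    wait "$pid" && rc=0 || rc=$?         # rc is this child's exit status
    if [ "$rc" -ne 0 ]; then
        echo "child $pid exited with rc=$rc"
        fail=1
    fi
done
echo "parent status: $fail"
```

Whether the original Solaris /bin/sh honours this is exactly the poster's complaint; under bash or ksh the per-PID form behaves as described.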

We have launched an introductory nurture stream, and our plan is to exclude contacts that are part of this nurture program from any of the other emails we send out until they've run through the entire stream. We plan to exclude those individuals by adding a constraint in the overall marketable contacts smart list that says they are not a member of this specific engagement program. Once the person is exhausted from the nurture stream, we'd like for them to be added into the larger database and receive all other communications.

With that being said, are contacts who are exhausted from nurture streams removed from the program? I assume that they are not, because of 1) tracking and 2) the possibility of adding more content to the stream at a later time. Given that, is there a way to place these contacts into the marketable list once they've exhausted the stream?

You are correct in that members are not removed from engagement programs when they exhaust content. In your smart list, you can add a constraint "Exhausted Content" to the Member of Engagement Program filter. This will exclude/include engagement program members based on your selection.

I recommend you make use of Segmentations. For one thing, I would create a Marketable segmentation with two segments: True and False. In "True", you would put in filters for Unsubscribed is False, Black Listed is False, Marketing Suspended is False, etc. - any filter that would cause you never to email someone if its field value wasn't as specified. Then do the opposite for the "False" segment.

Thanks for the suggestion. I played around this morning, see attached image. Using ALL filters, it brings the list down to only a few thousand, so I know that's incorrect. If I use ANY filters, it brings the list to a more accurate number, but I am still worried that it is taking people from the marketable segment OR individuals who have exhausted content from an engagement program, not both. Am I assuming incorrectly?

The screenshot you included above restricts your list only to people who have exhausted content in the engagement program AND are in the Marketable is True segment. So it leaves out all the people who aren't in the engagement program at all. You need to add another filter (and fill in the name of the engagement program):

I added the incorrect screen shot. I had switched it so that it was using ANY criteria. We have a few engagement programs going on at once, so I was trying to be broad enough that it caught everyone in all the programs. When you mentioned adding filter logic above - what is the third piece of criteria that I am missing?

For filter 3, click the plus symbol and add all of your engagement programs. Or, better, because it's scalable (you wouldn't have to change it when you add more engagement programs): establish a naming convention so that all of your engagement program names have the same beginning, like "Nurture:". Then, where it says "is" in filter 3, change it to "starts with" and put whatever your standard engagement program names begin with, e.g.:

(Of course if Alice is worried about Bob innocently fat-fingering his password, she can send him a few different copies of the same program. Put differently: one-time programs trivially imply N-time programs.)

The rest of our paper looks at ways to reduce the number of lockboxes required, mainly focusing on the asymptotic case. Our second proposal reduces the number of lockboxes to O(1) per input bit (when there are many input wires), and the final proposal removes the dependence on the number of input bits entirely. This means that, in principle, we can run programs of arbitrary input length using a reasonable fixed number of lockboxes.

Maybe. There are many applications of obfuscated programs (including one-time programs) that are extremely bad for the world. One of those applications is the ability to build extremely gnarly ransomware and malware. This is presumably one of the reasons that systems like TrustZone and Intel SGX require developers to possess a certificate in order to author code that can run in those environments.

* There is a second line of work that uses very powerful cryptographic assumptions and blockchains to build such programs. I am very enthusiastic about this work [and my co-authors and I have also written about this], but this post is going to ignore those ideas and stick with the hardware approach.
