I was looking for a function in Python which can shrink labels. It is kind of like erosion, but of each labelled region. Let's say I have two touching labels, 1 and 2, of two circles. Now I want to shrink both labels/erode pixels so that they no longer touch each other, separated by d pixels.
I am not quite sure I understand you correctly.
Do you want to shrink each label by n pixels? Or do you want to shrink until touching labels are separated by at least n pixels, leaving non-touching labels untouched?
In the latter case you could find the gradients in the label image: basically, every pixel where the label value changes. Then dilate this line by n pixels and use it to mask out the labels.
I was just reading your code a bit. Labels can disappear, yes, but if I am eroding only the border pixels they cannot be split into two, right? I need some time to read and understand your code, and then I will be happy to do a PR.
@VolkerH
What I wanted is the former: shrink each label by n pixels. I thought of the brute-force way too; I was just wondering whether I could do something along the lines of the expand_labels implementation, or whether someone has a function like that.
However, it does not take care of the scenarios described by @haesleinhuepf (disappearing objects and a label being cut in two; in the second case, both separated masks keep the original label).
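The brute-force approach mentioned above can be sketched with plain NumPy/SciPy (no scikit-image needed): erode each label's mask independently, so two touching regions both retreat by d pixels. The function name `shrink_labels` is my own, not an existing skimage API, and the caveats above still apply: a region narrower than 2d+1 pixels disappears entirely.

```python
import numpy as np
from scipy import ndimage as ndi

def shrink_labels(labels, d):
    """Shrink every labelled region by d pixels.

    Each label is eroded independently, so touching labels both
    retreat and end up separated; background (0) stays untouched.
    """
    out = np.zeros_like(labels)
    for lab in np.unique(labels):
        if lab == 0:
            continue
        mask = labels == lab
        # d iterations of erosion with the default 3x3 cross
        # structuring element remove a d-pixel-wide rim.
        eroded = ndi.binary_erosion(mask, iterations=d)
        out[eroded] = lab
    return out
```

This loops over all labels, so it is O(number of labels) full-image erosions; fine for a handful of labels, slow for thousands (where a distance-transform-per-label or boundary-masking approach would scale better).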
I did, but did not post the file back since it is so easy to do according to what I described. But apparently this is not the case? Attached is the result of it, joined without naked edges at the surface seams.
trimshrink_fixed.3dm (93.7 KB)
I'm getting charged per GB for SQL Server storage, so I have an incentive to use just as much storage as I need and no more. There are enough GB involved to create an attractive benefit to using less storage. I immediately think, "shrink the database." Then I think, "uh oh."
Nope, no progress. I got all tangled up in COVID-19 support for our agency, and am only fully back to my day job now. Actually, this item fell through the cracks, and I need to get back around to it. I think I have some trashy databases lying around that I can test on without concern. Thanks for the reminder. I'll post my results here when I have some.
Generally speaking, doing this with an Enterprise database is a huge no-no. While the process used does accomplish what you are looking to achieve, you're going to cause performance issues with larger and more heavily used databases by doing this.
@John_Spence - Are there any other ways around this problem that you may know of? Our database is similar to @TimMinter's in size and we are up against the same problem. Other database maintenance issues are also coming up as a result of the size. I'm tempted to go ahead with a shrink since it worked for Tim. Tim, did you ever end up seeing any problems arise from the shrink process? And did you end up rebuilding the indexes? Thanks!
If you really, really have no other option, you can shrink by releasing unused space after you compress the database (via the catalog DB Admin tools). At that point, you have done all you can short of doing the unthinkable.
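If you do go the shrink route, the usual advice is to follow the shrink with an index rebuild, because DBCC SHRINKFILE heavily fragments indexes (part of why it hurts performance on large, busy databases, as noted above). A minimal sketch that just composes the T-SQL, assuming placeholder names (`MyGDB`, `MyGDB_Data`); run the statements through whatever client you use, during a maintenance window:

```python
def shrink_then_rebuild_sql(db_name, logical_file, target_mb):
    """Compose T-SQL for a shrink followed by index rebuilds.

    DBCC SHRINKFILE fragments indexes as it moves pages, so
    rebuilding all indexes afterwards (here via the undocumented
    but long-standing sp_MSforeachtable) is generally recommended.
    """
    shrink = (f"USE [{db_name}]; "
              f"DBCC SHRINKFILE (N'{logical_file}', {target_mb});")
    rebuild = (f"USE [{db_name}]; "
               "EXEC sp_MSforeachtable 'ALTER INDEX ALL ON ? REBUILD';")
    return [shrink, rebuild]
```

Note the rebuild itself needs free space to work in, so the file often grows back somewhat; the point is to end with contiguous indexes at a sane size, not the absolute minimum.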
Even worse, the PostgreSQL WAL for this subscription ended up representing 4 TiB in a few days, and we had to stop it. Back to normal, the extra data space was released, but the reserved space still lives there. Paying for 7 TiB instead of 3 TiB is really expensive and, as you can guess, does not add any value to our business.
Unfortunately, directly shrinking a Cloud SQL database disk is not possible. While storage size can be increased, decreasing it is challenging due to the inherent limitations of the underlying storage system. However, there are several approaches you can take to optimize your data and migrate to a more efficient storage size:
Modified Use of Database Migration Service (DMS): Although DMS ran into trouble with PostgreSQL WAL growth in your initial attempt, a staged migration approach might be more effective. This involves using DMS to gradually migrate specific table data in batches to a new Cloud SQL instance with a smaller disk size. However, given the frequent schema changes and high data modification rate in your database, the success of this approach would depend heavily on these specific dynamics.
Hot Backup & Recovery with Additional Optimization: After performing a hot backup and restoring it to a new instance, consider using the VACUUM FULL command in PostgreSQL. This command can help reclaim unused space by defragmenting the database. Be aware that VACUUM FULL can be time-consuming and requires significant downtime, as it locks the tables during the process.
Exploring Alternative Backup Tools: Tools like pg_basebackup or third-party solutions might offer faster backup and restore capabilities compared to pg_dump. While these tools can potentially reduce the downtime, the overall time required will still largely depend on the database size and network bandwidth. Additionally, these methods may not directly address the disk size reduction.
Cold Defragmentation Approach: Cold defragmentation using external tools like pg_repack involves exporting the data, defragmenting it offline, and then uploading it to a new Cloud SQL instance with a smaller disk size. This process is complex and requires a deep understanding of PostgreSQL. Its effectiveness in reducing disk size also varies based on the database's specific characteristics.
Creating a New Instance with Desired Disk Size: One effective way to reduce disk size is to create a new Cloud SQL instance with the desired smaller disk size and migrate your data to this new instance. This method involves backing up your data and restoring it to the new instance, which can help in achieving the disk size reduction you're aiming for.
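For the last option, the export/create/import sequence can be sketched as a handful of `gcloud` invocations. A minimal sketch that only composes the commands; every name here (project, instances, bucket, database) is a placeholder, and you should verify the flags (`--storage-size`, `--database-version`, tier/region settings) against the current gcloud reference before running anything:

```python
def migrate_to_smaller_disk(project, src, dst, bucket, database, size_gb):
    """Sketch of the create-new-instance approach: export the source
    instance's database to Cloud Storage, create a new instance with
    the desired (smaller) disk, then import the dump into it.

    The Cloud SQL service account needs write access to the bucket,
    and the source should be quiesced during the export to avoid
    losing writes.
    """
    dump = f"gs://{bucket}/{src}-dump.sql.gz"
    return [
        f"gcloud sql export sql {src} {dump} "
        f"--database={database} --project={project}",
        f"gcloud sql instances create {dst} "
        f"--database-version=POSTGRES_15 --storage-size={size_gb}GB "
        f"--project={project}",
        f"gcloud sql import sql {dst} {dump} "
        f"--database={database} --project={project}",
    ]
```

Running VACUUM FULL on the source first (option 2 above) shrinks the dump and therefore the disk size the new instance actually needs.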
Buffalo Shrink Wrap has specialized in industrial shrink wrap protection for over 30 years! The proven, easy-to-learn, step-by-step shrink wrap installation process can be used to protect any size, shape or type of industrial machinery during shipping and storage.
Buffalo Shrink Wrap is a cost-effective way to protect modular homes during shipping and storage. The drum-tight, waterproof shrink wrap cover is commonly used to protect the entire modular home or parts of it: sides, ends or roof.
In that example I used one main, large image. I created a copy of the image and scaled it down to a smaller size. I used some entrance and exit animations (fade in/out) on both images and arranged them to appear in the timeline so they "flow" through the animations and appear as though the image is shrinking.
Thanks guys. Brett, I tried that one and it would work well, but my image is inset from the bottom corner, so when it shrinks away it makes this weird little jog to the edge of the slide and pulls it out of the area it should stay in.
I am trying to connect my laptop to a TV. I can't shrink the screen size using Intel Graphics Command Center as far as I have seen. I know I was able to do exactly that using the old Intel Graphics Control Panel. Is there any way to do this in Command Center, or is there another app (I don't care from who) that can shrink a display to match the corners correctly?
Your second suggestion also cannot be used, as the Intel Graphics Control Panel is no longer usable with the new Intel graphics drivers. I know that screen shrinking is possible because that program had a feature for it, but when I updated my computer new drivers were installed and I can't use the old program anymore.
I'm sorry that you can't follow my advice on your computer. I checked this on a NUC7i7DNHE and both methods work. On this computer I've installed the new DCH driver, the Intel Graphics Command Center and the Intel Graphics Control Panel. Please see the attached images.
I've tried all possible resolutions, and there is always some kind of clipping involved. I've also tried the best looking resolutions while also resizing different things on the screen (text, apps, etc.). This also does not work.
Intel does not verify all solutions, including but not limited to any file transfers that may appear in this community. Accordingly, Intel disclaims all express and implied warranties, including without limitation, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement, as well as any warranty arising from course of performance, course of dealing, or usage in trade.
When I do a cluster health check, nothing gets reallocated, so I'm confused as to why the shards are not getting shifted around for the shrink API to work correctly. I have a 3-node cluster where two of them are masters and the other one is a data node; the index is currently set to 5 shards and 1 replica, and I'm trying to take it down to 1 primary and 1 replica.
The ack you receive from this step indicates that the cluster has accepted the settings update, not that it has finished relocating all the shards onto the one node. You need to wait for that to happen too, perhaps using the wait_for_no_relocating_shards option of the cluster health API.
I'm a bit confused here: after executing the first step I run cluster health and no shards are getting reallocated at all, so I don't think it's moving them around for the second command to work properly.
To verify nothing is getting moved around, I also went to the monitoring page in Kibana and looked at the shard legend page for that particular index, and I still see all shards on the same nodes as they were before the execution of the command.
It looks like you are trying to move it to a node with IP address 172.16.99.212 and name lxc-elastic-01, but this does not match any of your nodes. The only node with that IP address has name 2sxScBp and not lxc-elastic-01:
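Putting the pieces of this thread together, the documented shrink sequence is: relocate all shards onto one node (using a node *name* that actually exists, e.g. `2sxScBp` here, not `lxc-elastic-01`), block writes, wait for relocation to finish, then call `_shrink`. A sketch that only builds the ordered REST calls, with placeholder index names; send them with any HTTP client against your cluster:

```python
def shrink_plan(source, target, node, replicas=1):
    """Ordered (method, path, body) REST calls for an index shrink.

    Step 2 matters: the ack from step 1 only means the settings were
    accepted, so we wait on cluster health with
    wait_for_no_relocating_shards before calling _shrink.
    """
    return [
        # 1. Pin all shards of the source index to one node, block writes.
        ("PUT", f"/{source}/_settings", {
            "settings": {
                "index.routing.allocation.require._name": node,
                "index.blocks.write": True,
            }}),
        # 2. Wait until relocation has actually completed.
        ("GET", f"/_cluster/health/{source}"
                "?wait_for_no_relocating_shards=true&timeout=5m", None),
        # 3. Shrink to 1 primary; replicas are set on the target index.
        ("POST", f"/{source}/_shrink/{target}", {
            "settings": {
                "index.number_of_shards": 1,
                "index.number_of_replicas": replicas,
            }}),
    ]
```

If step 2 never completes, check the allocation explain API: a wrong `_name` value (as in this thread) means the requirement can never be satisfied, so nothing relocates.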
Would there be an issue if I shrink a volume with existing data on it? Take for example: I have a volume called A sized at 2 TB which contains 1 TB of data. Can I shrink the volume to 1.5 TB without issues?