Comprehensive Master Plan Envisions Over 2,800 Units of Housing, Including Affordable Housing, and Fulfills Other Local Needs and Priorities Such as Community Facilities, Open Space, and Much-Needed Amenities
Governor Kathy Hochul today unveiled the Creedmoor Community Master Plan, a community-driven framework for redeveloping underutilized land at the 125-acre Creedmoor Psychiatric Center campus in Eastern Queens. The plan seeks to transform approximately 58 acres of the State-owned Creedmoor campus from surface parking lots, overgrowth, and vacant buildings into a vibrant new community with homes, recreational spaces, greenery, and neighborhood retail. It is the result of a six-month collaborative planning process led by Empire State Development and the Queens Borough President's Office, facilitated with help from the Metropolitan Urban Design Workshop. The plan, which was released today at a meeting with community stakeholders, is available here.
Governor Hochul announced a package of executive actions earlier this year to promote housing growth as part of an ongoing commitment to increasing the housing supply and addressing New York's housing crisis. As part of that package, the Governor directed state agencies to review lands in their ownership and control and determine whether those sites can be used for housing.
Yesterday, Governor Hochul unveiled a proposal to transform the former Lincoln Correctional Facility in New York City into a vibrant, mixed-use development with 105 units of affordable housing. The Governor has also announced requests for proposals to redevelop the former Bayview Correctional Facility and Javits Center's Site K in Manhattan and the former Downstate Correctional Facility in Fishkill with an emphasis on housing.
To detect anomalies in images with your model, you must first start your model with the StartModel operation. The Amazon Lookout for Vision console provides AWS CLI commands that you can use to start and stop your model. This section includes example code that you can use.
When you start your model, Amazon Lookout for Vision provisions a minimum of one compute resource, known as an inference unit. You specify the number of inference units to use in the MinInferenceUnits input parameter to the StartModel API. The default allocation for a model is 1 inference unit.
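A minimal sketch of starting a model with boto3, assuming a hypothetical project named `my-project` and model version `1` (substitute your own values). The request-building helper is pure Python so it can be inspected without AWS credentials; the actual StartModel call is wrapped in a function.

```python
def build_start_model_request(project_name, model_version, min_inference_units=1):
    # MinInferenceUnits defaults to 1 inference unit, matching the
    # service default described above.
    return {
        "ProjectName": project_name,
        "ModelVersion": model_version,
        "MinInferenceUnits": min_inference_units,
    }

def start_model(project_name, model_version, min_inference_units=1):
    # boto3 is imported here so the request-building helper above can be
    # used without the AWS SDK installed.
    import boto3
    client = boto3.client("lookoutvision")
    # StartModel is asynchronous: the model passes through STARTING_HOSTING
    # before it reaches HOSTED and can serve DetectAnomalies requests.
    client.start_model(**build_start_model_request(
        project_name, model_version, min_inference_units))

# Hypothetical project name and model version:
print(build_start_model_request("my-project", "1"))
```

Poll DescribeModel after calling StartModel to confirm the model reaches HOSTED status before sending images for analysis.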
You are charged for the number of hours that your model is running and for the number of inference units that it uses while running. For example, if you start the model with two inference units and use the model for 8 hours, you are charged for 16 inference hours (8 hours running time * two inference units). For more information, see Amazon Lookout for Vision Pricing. If you don't explicitly stop your model by calling StopModel, you are charged even if you are not actively analyzing images with it.
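The billing arithmetic above can be sketched as follows. The hourly rate argument is a placeholder, not an actual price; see the Lookout for Vision pricing page for current rates.

```python
def inference_hours(running_hours, inference_units):
    # You are billed for running time multiplied by the number of
    # inference units in use during that time.
    return running_hours * inference_units

def inference_cost(running_hours, inference_units, price_per_inference_hour):
    # price_per_inference_hour is a placeholder rate for illustration only.
    return inference_hours(running_hours, inference_units) * price_per_inference_hour

# The example from the text: two inference units running for 8 hours.
print(inference_hours(8, 2))  # 16 inference hours
```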
The algorithm that Lookout for Vision uses to train a model depends on the size of the dataset and its composition of normal and anomalous images. When you train a model, Lookout for Vision trains multiple candidate models and selects the one with the best performance.
You can increase or decrease the throughput of your model depending on the demands on your application. To increase throughput, add inference units: each additional inference unit raises your processing capacity by the throughput of a single unit. For information about calculating the number of inference units that you need, see Calculate inference units for Amazon Rekognition Custom Labels and Amazon Lookout for Vision models. If you want to change the supported throughput of your model, you have two options:
If your model has to accommodate spikes in demand, Amazon Lookout for Vision can automatically scale the number of inference units that your model uses. As demand increases, Amazon Lookout for Vision adds additional inference units to the model and removes them when demand decreases.
To let Lookout for Vision automatically scale inference units for a model, start the model and set the maximum number of inference units that it can use with the MaxInferenceUnits parameter. Setting a maximum number of inference units lets you manage the cost of running the model by limiting the number of inference units available to it. If you don't specify a maximum number of units, Lookout for Vision won't automatically scale your model; it uses only the number of inference units that you started with. For information regarding the maximum number of inference units, see Service Quotas.
You can also specify a minimum number of inference units by using the MinInferenceUnits parameter. This lets you specify the minimum throughput for your model, where a single inference unit represents 1 hour of processing time.
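The autoscaling setup above can be sketched as a StartModel request that supplies both parameters. Project name, version, and unit counts are hypothetical; the helper also checks that the maximum is not below the minimum, since the maximum caps how far automatic scaling can go.

```python
def build_autoscaling_start_request(project_name, model_version,
                                    min_inference_units, max_inference_units):
    # MinInferenceUnits sets the guaranteed baseline throughput;
    # MaxInferenceUnits caps automatic scaling, which bounds cost.
    if max_inference_units < min_inference_units:
        raise ValueError("MaxInferenceUnits must be >= MinInferenceUnits")
    return {
        "ProjectName": project_name,
        "ModelVersion": model_version,
        "MinInferenceUnits": min_inference_units,
        "MaxInferenceUnits": max_inference_units,
    }

def start_model_with_autoscaling(project_name, model_version,
                                 min_inference_units, max_inference_units):
    import boto3  # deferred so the helper above works without the AWS SDK
    client = boto3.client("lookoutvision")
    client.start_model(**build_autoscaling_start_request(
        project_name, model_version, min_inference_units, max_inference_units))

# Hypothetical values: a baseline of 1 unit, allowed to scale up to 3.
print(build_autoscaling_start_request("my-project", "1", 1, 3))
```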
Amazon Lookout for Vision distributes inference units across multiple Availability Zones within an AWS Region to provide increased availability. For more information, see Availability Zones. To help protect your production models from Availability Zone outages and inference unit failures, start your production models with at least two inference units.