Just to add to Fred's email -
To write to Redshift or Postgres from the Kinesis pipeline today, you set up a lambda architecture: the Kinesis S3 component writes the raw events (from the Scala Stream Collector) to S3, and then the regular Snowplow batch pipeline processes them through to Redshift or Postgres.
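To make the flow concrete, here is a minimal sketch of that lambda architecture in Python. All names here are hypothetical stand-ins (plain lists play the role of S3 and Redshift); it only illustrates the two-step shape of the pipeline, not any actual Snowplow API:

```python
def sink_to_s3(events, s3_bucket):
    """Speed-layer step: the Kinesis S3 component persists raw
    collector events to S3, one object per flushed batch."""
    s3_bucket.append(list(events))  # stand-in for an S3 PUT
    return len(events)

def batch_load(s3_bucket, warehouse):
    """Batch-layer step: the scheduled batch pipeline reads everything
    sunk so far, enriches it, and loads it into the warehouse."""
    for batch in s3_bucket:
        for event in batch:
            warehouse.append({"raw": event, "enriched": True})
    s3_bucket.clear()  # batch run consumes the staged objects
    return len(warehouse)

s3_bucket, warehouse = [], []
sink_to_s3(["ev1", "ev2"], s3_bucket)  # collector -> Kinesis -> S3
sink_to_s3(["ev3"], s3_bucket)
batch_load(s3_bucket, warehouse)       # batch run -> Redshift/Postgres
```

The point of the sketch is the latency trade-off: events only reach the warehouse when the batch step runs, which is why real-time drip-feeding is the desired end state.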
As Fred says, we want to support real-time drip-feeding of events into Redshift (and Postgres and other relational storage targets), but there is a lot of upfront work to do first on Iglu (our schema repository system) to prepare for this.