One of the often-overlooked sources of workload is the user traffic
(coming into the cloud) itself. Many compute cycles are expended on a
user even before they fire off an application. These cycles run a
workload that bears many similarities to traditional data mining,
except that here the data is real-time and carries an expiry date.
The workload involves evolving a user's profile from real-time/near-time
usage characteristics. In other words, the cloud itself is a data
warehouse, even if it hosts only stateless applications and no
user-uploaded data. The cloud has extensive built-in mining tools that
organize and extract information from this data and feed it back into
the very applications that brought the user to the cloud.
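To make the idea concrete, here is a minimal sketch of that kind of profile: usage events accumulate per user, and each observation silently expires after a fixed window, mirroring data that is "real-time and carries an expiry date." All names and the TTL value are illustrative assumptions, not any real cloud provider's API.

```python
import time

class UserProfile:
    """Hypothetical per-user profile evolved from a stream of usage events."""

    def __init__(self, ttl=3600):
        self.ttl = ttl        # assumed retention window, in seconds
        self.events = []      # (timestamp, feature) pairs

    def observe(self, feature, now=None):
        """Record one usage event (e.g. a page hit or an API call)."""
        self.events.append((time.time() if now is None else now, feature))

    def snapshot(self, now=None):
        """Return feature counts, dropping observations older than the TTL."""
        now = time.time() if now is None else now
        self.events = [(t, f) for t, f in self.events if now - t < self.ttl]
        counts = {}
        for _, f in self.events:
            counts[f] = counts.get(f, 0) + 1
        return counts
```

For example, with `ttl=10`, an event observed at time 0 still appears in a snapshot taken at time 6 but has aged out of a snapshot taken at time 12, while a later event survives. A production system would more likely use a store with native key expiry, but the principle is the same.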
So from a cloud customer's perspective, having an application in the
cloud gives them far more information about its usage than they could
ever get from running that application on a server farm: information
they can use in the development, monitoring, marketing, and metering
of the application. This information is more valuable than the
application itself.
Of course, this raises mainly privacy issues (not necessarily security
ones). How security practice will evolve to handle them is a potential
avenue for research in the cloud-computing space.
Thanks,
Vikas
> --
> You received this message because you are subscribed to the Google Groups "Cloud Security Alliance" group.
> To post to this group, send email to cloudsecur...@googlegroups.com.
> To unsubscribe from this group, send email to cloudsecurityall...@googlegroups.com.
> For more options, visit this group at http://groups.google.com/group/cloudsecurityalliance?hl=en.