You may lose some data. See
https://issues.apache.org/jira/browse/HADOOP-5333
They tried to fix this bug in Hadoop 0.19.1, but the implementation was
still not correct, and it was not changed in 0.19.2 or 0.20.x.
The "IMPORT FROM" clause simply copies the log file from your local
drive to HDFS, so the overhead depends on the size of your log file and
the speed of your network connection. How big is your log file?
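To get a feel for that overhead, copy time is roughly file size divided by effective network throughput. A quick back-of-the-envelope sketch (the file size and link speed below are made-up assumptions, not measurements from any real cluster):

```python
def estimated_copy_seconds(file_size_bytes: int, throughput_bytes_per_sec: int) -> float:
    """Rough lower bound on the time needed to copy a local file into
    HDFS: size divided by effective network throughput. Ignores HDFS
    replication and pipeline overhead, so real copies take longer."""
    return file_size_bytes / throughput_bytes_per_sec

# Hypothetical example: a 2 GiB log over an effective 100 MiB/s link.
gib, mib = 1024 ** 3, 1024 ** 2
print(f"{estimated_copy_seconds(2 * gib, 100 * mib):.1f} s")  # → 20.5 s
```

With default 3x replication the datanode write pipeline moves roughly three times the bytes, so treating this as a lower bound is the safe reading.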
Yanbo
On Apr 21, 11:56 am, Dhanya Aishwarya Palanisamy
<dhanya.aishwa...@gmail.com> wrote:
> Hi Yanbo,
>
> Thanks for the reply. I am using Hadoop 0.20.2. Any idea what kind of
> problems I will be facing if I try to append?
> Also, if I use the IMPORT FROM syntax to read the file, how much
> overhead will there be for copying a large file into HDFS?
> To avoid this scenario, we thought of writing logs directly into HDFS so
> that CloudBase can query them quickly.
>
> Thanks,
> Dhanya
>
> On Thu, Apr 22, 2010 at 12:06 AM, yanbo <ruya...@gmail.com> wrote:
> > In Hadoop 0.19.0, the file append API has been disabled due to
> > implementation issues that can lead to data loss. It was re-enabled in
> > 0.19.1 and 0.20.x, but the implementation still had some problems. So
> > it's better to write logs into a file in your local directory, then
> > upload it to HDFS once it's finished.
>
> > Yanbo
>
> > --
> > You received this message because you are subscribed to the Google Groups
> > "CloudBase" group.
> > To post to this group, send email to cloudba...@googlegroups.com.
> > To unsubscribe from this group, send email to
> > cloudbase-use...@googlegroups.com.
> For more options, visit this group at
> http://groups.google.com/group/cloudbase-users?hl=en.