Oracle Import - Encountering error "ORA-00933: SQL command not properly ended" with parallel import

ed Djatsa

May 9, 2011, 9:12:13 AM5/9/11
to Sqoop Users
Hi,
I am using Sqoop 1.2.0-cdh3u0 with the ojdbc14_g.jar JDBC driver. When trying to perform a parallel import from an Oracle DB with the command:
> sqoop import --connect jdbc:oracle:thin:@//DBServer:1521/DBname --username myuser --target-dir import_dir --split-by ID --query 'select * from TABLE_NAME where ID < 100 and $CONDITIONS' --verbose -P

I get the following error:
..................................
11/05/09 14:15:23 ERROR tool.ImportTool: Encountered IOException running import job: java.io.IOException: ORA-00933: SQL command not properly ended
	at com.cloudera.sqoop.mapreduce.db.DataDrivenDBInputFormat.getSplits(DataDrivenDBInputFormat.java:201)
	at org.apache.hadoop.mapred.JobClient.writeNewSplits(JobClient.java:944)
	at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:961)
..................................
However, when I disable parallel import by limiting the number of mappers to 1 with the -m 1 parameter, everything works fine!
From the documentation I read that, with parallel import, each map task executes a copy of the query, with results partitioned by bounding conditions inferred by Sqoop. Could the above error be caused by the modified query executed by each map task? If so, how can I see the exact query that is executed, so I can track down and solve the issue?
Thanks in advance and Best Regards,

Ed

P.S.: I found that there is the OraOop plugin for Sqoop, which performs imports from Oracle DBs with better performance; however, it requires some privileges which I don't have, so when I try using it I get: ORA-00942: table or view does not exist. I therefore have to stick to the default Sqoop connection manager.

Peter Hall

May 9, 2011, 7:36:47 PM5/9/11
to sqoop...@cloudera.org
Hi Ed,

Split the where clause into a separate argument:

sqoop import --connect jdbc:oracle:thin:@//DBServer:1521/DBname --username myuser --target-dir import_dir --split-by ID --query 'select * from TABLE_NAME' --where 'where ID < 100 and $CONDITIONS' --verbose -P

Sqoop needs to know if you are using a where clause or not so it can modify the clause to split the work for parallel processing by multiple mappers.
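For intuition, here is a rough Python sketch (hypothetical, not Sqoop's actual source) of what the DataDrivenDBInputFormat split logic does with $CONDITIONS: Sqoop first finds the MIN/MAX of the --split-by column, then gives each mapper the original query with a distinct range predicate substituted for the $CONDITIONS token.

```python
# Hypothetical sketch of Sqoop's split-query generation: one half-open
# range predicate over the --split-by column replaces $CONDITIONS in
# each mapper's copy of the query.

def split_conditions(col, lo, hi, num_mappers):
    """Return one range predicate per mapper, together covering [lo, hi]."""
    size = (hi - lo + 1) // num_mappers
    preds, start = [], lo
    for i in range(num_mappers):
        # The last mapper absorbs any remainder up to hi inclusive.
        end = hi + 1 if i == num_mappers - 1 else start + size
        preds.append(f"{col} >= {start} AND {col} < {end}")
        start = end
    return preds

query = "select * from TABLE_NAME where ID < 100 and $CONDITIONS"
for pred in split_conditions("ID", 1, 99, 4):
    print(query.replace("$CONDITIONS", pred))
```

Each printed statement is roughly what one mapper would execute, which is why a query that parses fine as written can fail with ORA-00933 once the bounding conditions are substituted in.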

With Oraoop: Would you mind turning on debug logging and telling me what query is causing an error? Add -D oraoop.logging.level=debug

Cheers,
Peter Hall
Quest Software


ed Djatsa

May 10, 2011, 6:14:09 AM5/10/11
to Sqoop Users
Hi Peter, thanks for your reply,

In the following, I will abbreviate the command "sqoop import --connect jdbc:oracle:thin:@//DBServer:1521/DBname --username myuser --target-dir import_dir --split-by ID" as "sqoop import ...".

I retried with the suggested modification and got:

"ERROR tool.ImportTool: Encountered IOException running import job: java.io.IOException: Query [select * from TABLE_NAME] must contain '$CONDITIONS' in WHERE clause.
	at com.cloudera.sqoop.orm.ClassWriter.generate(ClassWriter.java:913)"

I then tried both:
> sqoop import ... --query 'select * from TABLE_NAME where $CONDITIONS' --where 'ID < 1000' --verbose -P
and:
> sqoop import ... --query 'select * from TABLE_NAME where $CONDITIONS' --where 'ID < 1000 and $CONDITIONS'

And I still got:
"Encountered IOException running import job: java.io.IOException: ORA-00933: SQL command not properly ended
	at com.cloudera.sqoop.mapreduce.db.DataDrivenDBInputFormat.getSplits(DataDrivenDBInputFormat.java:201)"

I then changed to:
> sqoop import ... --table TABLE_NAME --where 'ID < 1000' --verbose -P
This time the job fails with the error:

11/05/10 11:26:56 INFO mapred.JobClient: map 0% reduce 0%
11/05/10 11:27:11 INFO mapred.JobClient: Task Id : attempt_201104261126_0045_m_000002_0, Status : FAILED
java.lang.NullPointerException
	at com.cloudera.sqoop.mapreduce.db.DataDrivenDBRecordReader.getSelectQuery(DataDrivenDBRecordReader.java:87)
	at com.cloudera.sqoop.mapreduce.db.DBRecordReader.nextKeyValue(DBRecordReader.java:225)
	at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:455)
	at org.apache.hadoop.mapreduce.MapContext.nextKeyValue(MapContext.java:67)
	at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:143)
	at com.cloudera.sqoop.mapreduce.AutoProgressMapper.run(AutoProgressMapper.java:187)
	at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:646)
	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:322)
	at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:396)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1115)
	at org.apache.hadoop.mapred.Child.main(Child.java:262)

attempt_201104261126_0045_m_000002_0: log4j:WARN No appenders could be found for logger (org.apache.hadoop.hdfs.DFSClient).
attempt_201104261126_0045_m_000002_0: log4j:WARN Please initialize the log4j system properly.
11/05/10 11:27:12 INFO mapred.JobClient: Task Id : attempt_201104261126_0045_m_000001_0, Status : FAILED

Even the full import below fails with the same error:
> sqoop import ... --table TABLE_NAME --verbose -P


Now trying with OraOop:

> sqoop import -D oraoop.logging.level=debug ... --table TABLE_NAME --where 'ID < 1000'
gives me the error:
11/05/10 11:33:10 DEBUG tool.BaseSqoopTool: Enabled debug logging.
Enter password:
11/05/10 11:33:14 DEBUG util.ClassLoaderStack: Checking for existing class: com.quest.oraoop.OraOopManagerFactory
11/05/10 11:33:14 DEBUG util.ClassLoaderStack: Class is already available. Skipping jar /Tools/Sqoop/lib/oraoop-1.2.0.62.jar
11/05/10 11:33:14 DEBUG sqoop.ConnFactory: Added factory com.quest.oraoop.OraOopManagerFactory in jar /Tools/Sqoop/lib/oraoop-1.2.0.62.jar specified by /Tools/sqoop-1.2.0-cdh3u0/bin/../conf/managers.d/oraoop
11/05/10 11:33:14 DEBUG sqoop.ConnFactory: Loaded manager factory: com.quest.oraoop.OraOopManagerFactory
11/05/10 11:33:14 DEBUG sqoop.ConnFactory: Loaded manager factory: com.cloudera.sqoop.manager.DefaultManagerFactory
11/05/10 11:33:14 DEBUG sqoop.ConnFactory: Trying ManagerFactory: com.quest.oraoop.OraOopManagerFactory
11/05/10 11:33:14 DEBUG oraoop.OraOopUtilities: Enabled OraOop debug logging.
11/05/10 11:33:14 DEBUG oraoop.OraOopManagerFactory: OraOop can be called by Sqoop!
11/05/10 11:33:15 DEBUG oraoop.OraOopUtilities: The Oracle table context has been derived from:
  oracleConnectionUserName = MYUSER
  tableStr = TABLE_NAME
as:
  owner : MYUSER
  table : TABLE_NAME
11/05/10 11:33:15 WARN oraoop.OraOopManagerFactory: Unable to determine the Oracle-type of the object named TABLE_NAME owned by TABLE_OWNER.
Error: ORA-00942: table or view does not exist

11/05/10 11:33:15 WARN oraoop.OraOopManagerFactory: Unable to determine whether the Oracle table TABLE_NAME.TABLE_OWNER is an index-organized table.
Error: ORA-00942: table or view does not exist

11/05/10 11:33:15 INFO oraoop.OraOopManagerFactory:
*******************************************
*** Using OraOop 1.2.0.62 ***
*** Copyright 2011 Quest Software, Inc. ***
*** ALL RIGHTS RESERVED. ***
*******************************************
11/05/10 11:33:15 INFO oraoop.OraOopManagerFactory: Oracle Database version: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bi
11/05/10 11:33:15 INFO oraoop.OraOopManagerFactory: This Oracle database is not a RAC.
11/05/10 11:33:15 WARN oraoop.OraOopManagerFactory: Unable to parse the JDBC connection URL "jdbc:oracle:thin:@//ncepspdb01:1521/funcsysFQSFUNC1.nce.amadeus.net" as a connection that uses the Oracle 'thin' JDBC driver.
This problem prevents OraOop from being able to dynamically generate JDBC URLs that specify 'dedicated server connections' or spread mapper sessions across multiple Oracle instances.
If the JDBC driver-type is 'OCI' (instead of 'thin'), then load-balancing should be appropriately managed automatically.
11/05/10 11:33:15 DEBUG sqoop.ConnFactory: Instantiated ConnManager com.quest.oraoop.OraOopConnManager@186fa9fc
11/05/10 11:33:15 INFO tool.CodeGenTool: Beginning code generation
11/05/10 11:33:15 DEBUG oraoop.OraOopOracleQueries: getTableColumns() : sql =
SELECT column_name, data_type FROM dba_tab_columns WHERE owner = ? and table_name = ? and (DATA_TYPE IN ('BINARY_DOUBLE','BINARY_FLOAT','BLOB','CHAR','CLOB','DATE','FLOAT','LONG','NCHAR','NCLOB','NUMBER','NVARCHAR2','ROWID','URITYPE','VARCHAR2') OR DATA_TYPE LIKE 'INTERVAL YEAR(%) TO MONTH' OR DATA_TYPE LIKE 'INTERVAL DAY(%) TO SECOND(%)' OR DATA_TYPE LIKE 'TIMESTAMP(%)' OR DATA_TYPE LIKE 'TIMESTAMP(%) WITH TIME ZONE' OR DATA_TYPE LIKE 'TIMESTAMP(%) WITH LOCAL TIME ZONE') ORDER BY column_id
11/05/10 11:33:15 ERROR sqoop.Sqoop: Got exception running Sqoop: java.lang.RuntimeException: java.sql.SQLException: ORA-00942: table or view does not exist

java.lang.RuntimeException: java.sql.SQLException: ORA-00942: table or view does not exist
	at com.quest.oraoop.OraOopConnManager.getColumnNamesInOracleTable(OraOopConnManager.java:119)
	at com.quest.oraoop.OraOopConnManager.getSelectedColumnNamesInOracleTable(OraOopConnManager.java:130)
	at com.quest.oraoop.OraOopConnManager.getColTypesQuery(OraOopConnManager.java:193)
	at com.cloudera.sqoop.manager.SqlManager.getColumnTypes(SqlManager.java:160)
	at com.quest.oraoop.OraOopConnManager.getColumnTypes(OraOopConnManager.java:455)
	at com.cloudera.sqoop.orm.ClassWriter.generate(ClassWriter.java:908)
	at com.cloudera.sqoop.tool.CodeGenTool.generateORM(CodeGenTool.java:82)
	at com.cloudera.sqoop.tool.ImportTool.importTable(ImportTool.java:337)
	at com.cloudera.sqoop.tool.ImportTool.run(ImportTool.java:423)
	at com.cloudera.sqoop.Sqoop.run(Sqoop.java:144)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
	at com.cloudera.sqoop.Sqoop.runSqoop(Sqoop.java:180)
	at com.cloudera.sqoop.Sqoop.runTool(Sqoop.java:218)
	at com.cloudera.sqoop.Sqoop.main(Sqoop.java:228)
Caused by: java.sql.SQLException: ORA-00942: table or view does not exist
When trying the list-databases command:
> sqoop list-databases -D oraoop.logging.level=debug ...

I get an explicit error: even though OraOop warns that it doesn't support the LIST-DATABASES command, it tells me that I don't have privileges on the DBA tables, which is correct:

11/05/10 11:42:35 DEBUG tool.BaseSqoopTool: Enabled debug logging.
Enter password:
11/05/10 11:42:39 DEBUG util.ClassLoaderStack: Checking for existing class: com.quest.oraoop.OraOopManagerFactory
11/05/10 11:42:39 DEBUG util.ClassLoaderStack: Class is already available. Skipping jar /Tools/Sqoop/lib/oraoop-1.2.0.62.jar
11/05/10 11:42:39 DEBUG sqoop.ConnFactory: Added factory com.quest.oraoop.OraOopManagerFactory in jar /Tools/Sqoop/lib/oraoop-1.2.0.62.jar specified by /Tools/sqoop-1.2.0-cdh3u0/bin/../conf/managers.d/oraoop
11/05/10 11:42:39 DEBUG sqoop.ConnFactory: Loaded manager factory: com.quest.oraoop.OraOopManagerFactory
11/05/10 11:42:39 DEBUG sqoop.ConnFactory: Loaded manager factory: com.cloudera.sqoop.manager.DefaultManagerFactory
11/05/10 11:42:39 DEBUG sqoop.ConnFactory: Trying ManagerFactory: com.quest.oraoop.OraOopManagerFactory
11/05/10 11:42:39 DEBUG oraoop.OraOopUtilities: Enabled OraOop debug logging.
11/05/10 11:42:39 DEBUG oraoop.OraOopManagerFactory: OraOop can be called by Sqoop!
11/05/10 11:42:39 DEBUG oraoop.OraOopManagerFactory: The Sqoop tool name "LIST-DATABASES" is not supported by OraOop
java.lang.IllegalArgumentException: No enum const class com.quest.oraoop.OraOopConstants$Sqoop$Tool.LIST-DATABASES
	at java.lang.Enum.valueOf(Enum.java:196)
	at com.quest.oraoop.OraOopConstants$Sqoop$Tool.valueOf(OraOopConstants.java:312)
....
.....
11/05/10 11:42:39 DEBUG manager.OracleManager: Creating a new connection for jdbc:oracle:thin:@//DBServer:1521/DBname
11/05/10 11:42:40 INFO manager.OracleManager: Time zone has been set to GMT
11/05/10 11:42:40 ERROR manager.OracleManager: The catalog view DBA_USERS was not found. This may happen if the user does not have DBA privileges. Please check privileges and try again.
11/05/10 11:42:40 DEBUG manager.OracleManager: Full trace for ORA-00942 exception
java.sql.SQLException: ORA-00942: table or view does not exist
	at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:145)


P.S.: the following is without OraOop.
Something quite strange to me: when I run the import with a single mapper this way:
> sqoop import ... --table TABLE_NAME --split-by OWNER_ID -m1 --verbose -P
it fails with the error:

11/05/10 11:52:33 INFO mapred.JobClient: Running job: job_201104261126_0051
11/05/10 11:52:34 INFO mapred.JobClient: map 0% reduce 0%
11/05/10 11:52:41 INFO mapred.JobClient: Task Id : attempt_201104261126_0051_m_000000_0, Status : FAILED
java.lang.NullPointerException
	at com.cloudera.sqoop.mapreduce.db.DataDrivenDBRecordReader.getSelectQuery(DataDrivenDBRecordReader.java:87)
	at com.cloudera.sqoop.mapreduce.db.DBRecordReader.nextKeyValue(DBRecordReader.java:225)


But when I run it with the --query parameter and a single mapper, it works fine:
> sqoop import ... --query 'select * from TABLE_NAME where ID<1000 AND $CONDITIONS' -m1 --verbose -P

How would you explain this behaviour, and how can I definitively solve the parallel import issue mentioned above?

Thanks in advance,

Regards,
Ed



Peter Hall

May 10, 2011, 7:53:18 PM5/10/11
to sqoop...@cloudera.org
Looks like some of your problems are caused by using single quotes around your query instead of double quotes. This prevents environment variable expansion in the quoted string so $CONDITIONS isn't being replaced with your conditions.
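The quoting behaviour can be checked outside Sqoop. The following sketch (the WHERE fragment is just an example string, and the CONDITIONS variable name only stands in for whatever the shell environment holds) shows what literal text each quoting style delivers to the invoked command:

```python
# Shows what the shell actually passes through under each quoting style:
# single quotes keep the $CONDITIONS token literal, while double quotes
# make the shell expand it as a variable (empty here, since it is unset).
import os
import subprocess

# Make sure CONDITIONS is unset so the double-quoted expansion is empty.
env = {k: v for k, v in os.environ.items() if k != "CONDITIONS"}

single = subprocess.run(
    ["sh", "-c", "printf '%s' 'WHERE ID < 100 AND $CONDITIONS'"],
    capture_output=True, text=True, env=env,
).stdout
double = subprocess.run(
    ["sh", "-c", 'printf \'%s\' "WHERE ID < 100 AND $CONDITIONS"'],
    capture_output=True, text=True, env=env,
).stdout

print(repr(single))  # 'WHERE ID < 100 AND $CONDITIONS'
print(repr(double))  # 'WHERE ID < 100 AND ' -- token expanded away
```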

In one case you have two where clauses: one inside --query "... where $CONDITIONS" and a second in a --where argument. These should be combined into a single --where "$CONDITIONS and ...".

The OraOop issues are caused by not having necessary permissions, see section 3 of the oraoop user guide for details.

I'm not sure what's causing the parallel import issue. Can you share any details about your schema and what permissions you have on it?

Cheers,
Peter Hall
Quest Software

ed Djatsa

May 12, 2011, 11:40:15 AM5/12/11
to Sqoop Users
Hi,
I managed to get a user account with the appropriate privileges, and the Sqoop import now works fine with OraOop!
However, when I try a hive-import with Sqoop, to get the imported table's metadata into the Hive metastore so I can execute SQL-like queries on the distributed data, I encounter some errors. Here is the command I execute:

> sqoop import -D oraoop.logging.level=debug -m 6 --connect jdbc:oracle:thin:@//dbserver:1521/mydb --username myuser --table DBFUNC1.R1_RECORD --split-by ID --where 'DBFUNC1.R1_RECORD.ID IN (34208,1664,2151,2270,2293,33132,1891,2297,1410,1433,1579,1983,10271,44892,420,262140,574,988,2163,754,1036,1611,1620,39777)' --hive-import --verbose -P

The import is performed correctly and the data is uploaded to HDFS, but when it comes to creating the Hive metadata I get an error. Here is a summary of the output:

*** End of import phase ***
.......................
11/05/12 10:15:00 INFO mapreduce.ImportJobBase: Transferred 67.9446 MB in 109.9914 seconds (632.5525 KB/sec)
11/05/12 10:15:00 INFO mapreduce.ImportJobBase: Retrieved 1711245 records.


*** Beginning of Hive metadata creation ***
11/05/12 10:15:00 INFO hive.HiveImport: Loading uploaded data into Hive
11/05/12 10:15:00 DEBUG hive.HiveImport: Hive.inputTable: DBFUNC1.REC1_RECORD
11/05/12 10:15:00 DEBUG hive.HiveImport: Hive.outputTable: DBFUNC1.REC1_RECORD
11/05/12 10:15:00 INFO manager.SqlManager: Executing SQL statement: SELECT ID,AIRLINE_CODE,TARIFF_NUMBER,RULE,FA_CLASS,LAST_MODIFID,ACCESS_KEY,AGREEMENT FROM DBFUNC1.R1_RECORD WHERE 1=0
11/05/12 10:15:00 WARN hive.TableDefWriter: Column ID had to be cast to a less precise type in Hive
11/05/12 10:15:00 WARN hive.TableDefWriter: Column TARIFF_NUMBER had to be cast to a less precise type in Hive
11/05/12 10:15:00 WARN hive.TableDefWriter: Column LAST_MODIFID had to be cast to a less precise type in Hive
11/05/12 10:15:00 WARN hive.TableDefWriter: Column ACCESS_KEY had to be cast to a less precise type in Hive
11/05/12 10:15:00 WARN hive.TableDefWriter: Column AGREEMENT had to be cast to a less precise type in Hive
11/05/12 10:15:00 DEBUG hive.TableDefWriter: Create statement: CREATE TABLE IF NOT EXISTS `DBFUNC1.R1_RECORD` ( `ID` DOUBLE, `AR_CODE` STRING, `TARIFF_NUMBER` DOUBLE, `RULE` STRING, `FA_CLASS` STRING, `LAST_MODIFID` DOUBLE, `ACCESS_KEY` DOUBLE, `AGREEMENT` DOUBLE) COMMENT 'Imported by sqoop on 2011/05/12 10:15:00' ROW FORMAT DELIMITED FIELDS TERMINATED BY '\001' LINES TERMINATED BY '\012' STORED AS TEXTFILE
11/05/12 10:15:00 DEBUG hive.TableDefWriter: Load statement: LOAD DATA INPATH 'hdfs://.../myuser/DBFUNC1.R1_RECORD' INTO TABLE `DBFUNC1.R1_RECORD`
11/05/12 10:15:00 DEBUG hive.HiveImport: Using external Hive process.
11/05/12 10:15:02 INFO hive.HiveImport: Hive history file=/tmp/edjatsay/hive_job_log_edjatsay_201105121015_627399115.txt
11/05/12 10:15:08 INFO hive.HiveImport: FAILED: Error in metadata: InvalidObjectException(message:Database DBFUNC1 doesn't exist.)


The folder "hdfs://.../myuser/DBFUNC1.R1_RECORD" is present and contains the imported data, but I can't understand why Hive gives me this error: "hive.HiveImport: FAILED: Error in metadata: InvalidObjectException(message:Database DBFUNC1 doesn't exist.)". Why is it looking for a database named DBFUNC1?

Regards,
Ed




Aaron Kimball

May 12, 2011, 1:43:03 PM5/12/11
to sqoop...@cloudera.org
It's trying to create a Hive table with the same name as the imported table, "DBFUNC1.R1_RECORD". Hive is getting confused by the "." in the middle, I believe.

Try manually specifying a Hive table name with --hive-table-name.
- Aaron
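For reference, the renaming Aaron suggests amounts to collapsing the dotted Oracle identifier into a single-part name that Hive won't parse as database.table. A trivial sketch (the helper below is hypothetical, just to show the mapping):

```python
# Hypothetical helper: Hive parses "DBFUNC1.R1_RECORD" as database
# DBFUNC1 plus table R1_RECORD, so a dotted Oracle OWNER.TABLE name
# must be flattened into a single-part Hive table name.
def to_hive_table(oracle_name: str) -> str:
    return oracle_name.replace(".", "_")

print(to_hive_table("DBFUNC1.R1_RECORD"))  # DBFUNC1_R1_RECORD
```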

ed Djatsa

May 16, 2011, 8:59:49 AM5/16/11
to Sqoop Users
Thanks Aaron, it now works smoothly with the --hive-table-name option.
However, there is another serious issue that I can't figure out how to solve. Before, I was running simple queries with Hive and they worked correctly; then I tried a more complex query with a JOIN and I get an error. This is the query I run:

hive> select r1_record.id, r1_data.fa_type from r1_record JOIN r1_data ON (r1_record.id = r1_data.id);

I get the following output:
Starting Job = job_201105161109_0001, Tracking URL = ....
Kill Command = .... -kill job_201105161109_0001
2011-05-16 11:23:21,999 Stage-1 map = 0%, reduce = 0%
2011-05-16 11:23:48,148 Stage-1 map = 100%, reduce = 100%
Ended Job = job_201105161109_0001 with errors
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask

In the logs of my JobTracker it says:
java.lang.RuntimeException: java.io.FileNotFoundException: HIVE_PLANf0ff0b6e-dd82-4962-9678-1f689f1c8ee0 (No such file or directory)

I found that this issue has already been reported at:
https://issues.apache.org/jira/browse/HIVE-1019
I tried applying the patches listed there, but I get these errors:

patching file ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java
Hunk #1 FAILED at 123.
Hunk #2 FAILED at 189.
Hunk #3 FAILED at 214.
3 out of 3 hunks FAILED -- saving rejects to file ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java.rej
patching file ql/src/java/org/apache/hadoop/hive/ql/io/CombineHiveRecordReader.java
Hunk #2 succeeded at 47 (offset 6 lines).
patching file ql/src/java/org/apache/hadoop/hive/ql/io/CombineHiveInputFormat.java
Hunk #1 succeeded at 18 with fuzz 1.
Hunk #2 succeeded at 36 with fuzz 2 (offset 7 lines).
Hunk #3 FAILED at 59.
Hunk #4 succeeded at 191 (offset 114 lines).
Hunk #5 FAILED at 207.
2 out of 5 hunks FAILED -- saving rejects to file ql/src/java/org/apache/hadoop/hive/ql/io/CombineHiveInputFormat.java.rej
patching file ql/src/java/org/apache/hadoop/hive/ql/io/HiveInputFormat.java
Hunk #2 FAILED at 62.
Hunk #3 succeeded at 97 (offset 3 lines).
1 out of 3 hunks FAILED -- saving rejects to file ql/src/java/org/apache/hadoop/hive/ql/io/HiveInputFormat.java.rej
patching file ql/src/java/org/apache/hadoop/hive/ql/io/CombineHiveInputSplit.java
patching file ql/src/java/org/apache/hadoop/hive/ql/io/HiveInputSplit.java
patching file ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java
Hunk #1 FAILED at 123.
Hunk #2 FAILED at 189.
Hunk #3 FAILED at 214.
3 out of 3 hunks FAILED -- saving rejects to file ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java.rej
patching file ql/src/java/org/apache/hadoop/hive/ql/io/CombineHiveRecordReader.java
Reversed (or previously applied) patch detected! Assume -R? [n] y
Hunk #2 succeeded at 47 (offset 6 lines).
patching file ql/src/java/org/apache/hadoop/hive/ql/io/CombineHiveInputFormat.java
Reversed (or previously applied) patch detected! Assume -R? [n] y
Hunk #1 succeeded at 18 with fuzz 1.
Hunk #2 succeeded at 38 with fuzz 2 (offset 7 lines).
Hunk #3 FAILED at 62.
Hunk #4 succeeded at 558 (offset 296 lines).
Hunk #5 FAILED at 574.
2 out of 5 hunks FAILED -- saving rejects to file ql/src/java/org/apache/hadoop/hive/ql/io/CombineHiveInputFormat.java.rej
patching file ql/src/java/org/apache/hadoop/hive/ql/io/HiveInputFormat.java
Reversed (or previously applied) patch detected! Assume -R? [n] y
Hunk #2 FAILED at 64.
......

After patching (I'm not sure it was successful; maybe those patches are already applied in hive-0.7.0-cdh3u0?) I tried again and got the same error in Hive.

Thanks again for your concern,
Regards
Ed

Aaron Kimball

May 16, 2011, 2:26:14 PM5/16/11
to sqoop...@cloudera.org
Hi Ed,

I'm afraid this is outside my domain of expertise. You may have better luck on Cloudera's platform mailing list (cdh-...@cloudera.org) or the Apache Hive mailing list (us...@hive.apache.org).

Regards,
- Aaron