I've set up the following on my laptop:
- Apache Hadoop 2.7.1, single node
- Hive 2.1.0 running in local mode with the metastore on MySQL; the metastore is not running as a Thrift service
- Spark 2.0.0
- Scala 2.11
I've copied hive-site.xml from hive/conf to spark/conf. When I go into spark-shell, I can use the SQL context to create tables in Hive and query them, and I can access all the existing Hive tables. My issue is that from the Eclipse IDE I'm not able to connect to the existing Hive installation.
Eclipse can connect to the existing Spark master to submit jobs, and I can see the jobs in the UI, but when I use SparkSession to connect to Hive, it always creates its own Derby database. I've searched extensively but cannot figure out why.
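For reference, this is roughly how the session would be built from the Eclipse application: a minimal sketch, assuming the metastore is exposed over Thrift on the default port 9083; the master URL, metastore URI, and warehouse path below are placeholders, not my actual values.

```scala
import org.apache.spark.sql.SparkSession

// Sketch: point the SparkSession at an external Hive metastore instead of
// letting Spark spin up a local Derby one. All hosts/paths are placeholders.
val spark = SparkSession.builder()
  .appName("HiveFromEclipse")
  .master("spark://master-host:7077")                 // placeholder master URL
  .config("hive.metastore.uris", "thrift://localhost:9083")  // placeholder metastore URI
  .config("spark.sql.warehouse.dir", "hdfs://localhost:9000/user/hive/warehouse") // placeholder
  .enableHiveSupport()                                 // required to use the Hive catalog
  .getOrCreate()

// Should list the tables from the existing Hive metastore, not Derby
spark.sql("SHOW TABLES").show()
```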
- Does Hive need to be set up with the metastore running as a Thrift service for Eclipse to connect? Why is Eclipse not using the existing Hive installation?
- Is there a way to connect to my existing Hive installation from Eclipse?
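For context, this is the kind of hive-site.xml fragment I would expect to need if the metastore has to be exposed as a Thrift service; the port 9083 is the Hive default and the host is an assumption, not my current configuration.

```xml
<!-- hive-site.xml fragment (sketch): exposes the MySQL-backed metastore
     over Thrift so external clients (e.g. a Spark app launched from
     Eclipse) can reach it. thrift://localhost:9083 is an assumed value. -->
<property>
  <name>hive.metastore.uris</name>
  <value>thrift://localhost:9083</value>
</property>
```

The metastore service itself would then be started with `hive --service metastore`.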
I'm not sure what else to look for.