MSCK REPAIR TABLE is a command in Apache Hive that adds partitions to a table. You can also register partitions one at a time with the ALTER TABLE ADD PARTITION statement, but this is more cumbersome than MSCK REPAIR TABLE.

In IBM Big SQL, the bigsql user can grant EXECUTE permission on the HCAT_SYNC_OBJECTS stored procedure to any user, group or role, and that user can execute the stored procedure manually if necessary. New in Big SQL 4.2 is the auto hcat-sync feature: it checks whether any tables have been created, altered or dropped from Hive and triggers an automatic HCAT_SYNC_OBJECTS call if needed to sync the Big SQL catalog and the Hive metastore. Performance tip: call the HCAT_SYNC_OBJECTS stored procedure using the MODIFY option instead of REPLACE where possible; the REPLACE option drops and recreates the table in the Big SQL catalog, and all statistics that were collected on that table are lost.

When you use the AWS Glue Data Catalog with Athena, the IAM policy must allow the glue:BatchCreatePartition action. If the policy doesn't allow that action, Athena can't add partitions to the metastore: MSCK REPAIR TABLE detects partitions in Athena but does not add them, and you may receive an "access denied" error (see "When I run an Athena query, I get an 'access denied' error" in the AWS Knowledge Center). Several other Athena failure modes can look similar. Athena does not support deleting or replacing the contents of a file when a query is running, so an error usually occurs when a file is removed, or otherwise changed, between query planning and query execution. Queries against objects in the S3 Glacier retrieval or S3 Glacier Deep Archive storage classes fail; to make restored objects readable by Athena, copy them to a storage class that Athena can query. You might receive the error message FAILED: NullPointerException Name is null when a name contains a question mark; the solution is to remove the question mark in Athena or in AWS Glue. Queries can fail with the error message HIVE_PARTITION_SCHEMA_MISMATCH when the partition schema differs from the table schema, for example when you define a column as a map or struct but the underlying data is a scalar, or when a column is defined with the data type INT while the data is of type BYTE. A CREATE TABLE AS SELECT (CTAS) query can fail once it exceeds 100 open writers for partitions/buckets; for steps to work around this, see "Using CTAS and INSERT INTO to work around the 100 partition limit". Athena can also use non-Hive style partitioning schemes. For information about troubleshooting workgroup issues, see Troubleshooting workgroups; for federated queries, see Common_Problems in the awslabs/aws-athena-query-federation section; the Athena topics in the AWS Knowledge Center and Athena posts in the AWS Big Data Blog can also be of help.

When you create a table using the PARTITIONED BY clause, partitions are generated and registered in the Hive metastore as data is written. However, if the partitioned table is created from existing data, partitions are not registered automatically in the Hive metastore, so a query such as SELECT * FROM t1 returns no results until they are recovered. MSCK REPAIR TABLE recovers all the partitions in the directory of a table and updates the Hive metastore; the cache is then lazily filled the next time the table or its dependents are accessed. In Spark, fast gathering of partition statistics during recovery is controlled by spark.sql.gatherFastStats, which is enabled by default. For more information, see Recover Partitions (MSCK REPAIR TABLE).
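The inline fragments above (/tmp/namesAndAges.parquet, the empty SELECT, the repair) come from the Spark SQL documentation example; a minimal sketch of that flow follows, with the t1 schema assumed for illustration:

-- create a partitioned table on top of existing data at /tmp/namesAndAges.parquet
-- (schema assumed for illustration)
CREATE TABLE t1 (name STRING, age INT)
    USING parquet
    PARTITIONED BY (age)
    LOCATION '/tmp/namesAndAges.parquet';

-- SELECT * FROM t1 does not return results, because the partition
-- directories that already exist are not yet registered in the metastore
SELECT * FROM t1;

-- run MSCK REPAIR TABLE to recover all the partitions
MSCK REPAIR TABLE t1;

-- the same query now returns the partitioned data
SELECT * FROM t1;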
When a table is created, altered or dropped in Hive, the Big SQL catalog and the Hive metastore need to be synchronized so that Big SQL is aware of the new or modified table; the aim is for the HDFS paths and the partitions of a table to stay in sync under any condition. (The Big SQL Scheduler cache and its synchronization procedures are described further below.) Partitioning is worth this bookkeeping because often you only need to scan the part of the data you care about.

Malformed input produces its own family of errors in Athena. Querying CSV data can fail with messages such as HIVE_BAD_DATA: Error parsing field value for field x: For input string: "12312845691", which indicates a value too large for the declared column type, or a field value that contains the characters separating the fields in the record. A UTF-8 encoded CSV file that has a byte order mark (BOM) can also cause failures: the BOM gets changed to a question mark, which Amazon Athena doesn't recognize; converting the data type to string and retrying is the suggested workaround. With JSON input you may see "not a valid JSON Object" or HIVE_CURSOR_ERROR messages, typically when the JSON text is in pretty-print format rather than one record per line; to transform the JSON, you can use CTAS or create a view, and the AWS Knowledge Center explains how to identify the lines that are causing the errors. For a list of functions that Athena supports, see Functions in Amazon Athena or run the SHOW FUNCTIONS statement; Athena also supports user-defined functions (UDFs).

Separately, an error is reported when MSCK REPAIR TABLE table_name is run on Hive and a reserved keyword is used as an identifier. There are two ways to keep using reserved keywords as identifiers, as in the sketch below: (1) use quoted identifiers, or (2) set hive.support.sql11.reserved.keywords=false.
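A minimal sketch of the two workarounds; the table partition_demo and its date column are hypothetical, and on some Hive versions the property is restricted and must be set in hive-site.xml rather than per session:

-- Option 1: quote the reserved keyword with backticks
SELECT COUNT(*) FROM partition_demo WHERE `date` = '2023-01-01';

-- Option 2: stop treating SQL:2011 reserved words as keywords
-- (may need to go in hive-site.xml on versions that restrict SET)
SET hive.support.sql11.reserved.keywords=false;
MSCK REPAIR TABLE partition_demo;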
MSCK REPAIR TABLE can be useful if you lose the data in your Hive metastore or if you are working in a cloud environment without a persistent metastore. For example, if you transfer data from one HDFS system to another, use MSCK REPAIR TABLE to make the Hive metastore aware of the partitions on the new HDFS. A good use of MSCK REPAIR TABLE is likewise to repair metastore metadata after you move your data files to cloud storage, such as Amazon S3. The typical symptom is that the Hive metadata is lost while the data on HDFS is intact, so the table's partitions no longer show up. You just need to run the MSCK REPAIR TABLE command, and Hive will detect the partition directories on HDFS and write the partition information that is missing from the metastore back into it:

hive> MSCK REPAIR TABLE mybigtable;

When the table is repaired in this way, Hive will be able to see the files in the new directory, and if the 'auto hcat-sync' feature is enabled in Big SQL 4.2 then Big SQL will be able to see this data as well.

On tables with many partitions, the repair itself can overwhelm the metastore. By giving a batch size through the property hive.msck.repair.batch.size, the command can run in batches internally; limiting the number of partitions created per batch prevents the Hive metastore from timing out or hitting an out-of-memory error. See HIVE-874 and HIVE-17824 for more details. Note that Azure Databricks uses multiple threads for a single MSCK REPAIR by default, which splits createPartitions() into batches, and its knowledge base covers an error when running MSCK REPAIR TABLE in parallel. The sketch below assumes you created a partitioned external table named emp_part that stores partitions outside the warehouse.
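A minimal example of the batched repair (the batch size shown is illustrative; the default value of hive.msck.repair.batch.size varies by Hive version):

-- process partitions in batches to avoid metastore timeouts and OOM errors
SET hive.msck.repair.batch.size=500;
MSCK REPAIR TABLE emp_part;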
Big SQL has its own synchronization story around these operations. When a table is created from Big SQL, the table is also created in Hive. Prior to Big SQL 4.2, if you issue a DDL event such as CREATE, ALTER or DROP TABLE from Hive, you then need to call the HCAT_SYNC_OBJECTS stored procedure to sync the Big SQL catalog and the Hive metastore; in Big SQL 4.2, if you do not enable the auto hcat-sync feature, you likewise need to call HCAT_SYNC_OBJECTS after a DDL event has occurred. Big SQL also maintains its own catalog, which contains all other metadata (permissions, statistics, and so on). If files corresponding to a Big SQL table are directly added or modified in HDFS, or data is inserted into a table from Hive, and you need to access this data immediately, you can force the Big SQL Scheduler cache to be flushed by using the HCAT_CACHE_SYNC stored procedure. Statistics can be managed on internal and external tables and partitions for query optimization, but by default Hive does not collect any statistics automatically, so when HCAT_SYNC_OBJECTS is called, Big SQL will also schedule an auto-analyze task (auto-analyze is available in Big SQL 4.2 and later releases); if there are repeated HCAT_SYNC_OBJECTS calls, there is no risk of unnecessary ANALYZE statements being executed on that table. The calls from the IBM examples, cleaned up, are:

-- Sync all objects in the bigsql schema, replacing definitions in the Big SQL catalog
CALL SYSHADOOP.HCAT_SYNC_OBJECTS('bigsql', '.*', 'a', 'REPLACE', 'CONTINUE');
-- Tell the Big SQL Scheduler to flush its cache for a particular schema
CALL SYSHADOOP.HCAT_CACHE_SYNC('bigsql');
-- Tell the Big SQL Scheduler to flush its cache for a particular object
CALL SYSHADOOP.HCAT_CACHE_SYNC('bigsql', 'mybigtable');
-- Sync one object with the MODIFY option, which preserves collected statistics
CALL SYSHADOOP.HCAT_SYNC_OBJECTS('bigsql', 'mybigtable', 'a', 'MODIFY', 'CONTINUE');
CALL SYSHADOOP.HCAT_CACHE_SYNC('bigsql');

On CDH 7.1 there is a reported case where MSCK REPAIR is not working properly; the first thing to check in Cloudera Manager is the HiveServer2 role: on the Instances page, click the link of the HS2 node that is down, and on the HiveServer2 Processes page, scroll down to inspect the process.

Several more AWS Knowledge Center topics intersect with this command: the error "FAILED: SemanticException table is not partitioned but partition spec exists" in Athena; querying a bucket in another account; an empty TIMESTAMP result when you query a table in Amazon Athena; "access denied" errors when an object was uploaded by another AWS service and the second account is the bucket owner but does not own the object; errors where the number of partition values does not match the number of filters; HIVE_UNKNOWN_ERROR: Unable to create input format; resolving the "unable to verify/create output bucket" error; increasing the maximum query string length in Athena; troubleshooting issues with compressed formats; and specifying a query results location in the Region in which you run the query. Keep in mind as well that temporary credentials have a maximum lifespan of 12 hours. Separately, using Parquet modular encryption, Amazon EMR Hive users can protect both Parquet data and metadata, use different encryption keys for different columns, and perform partial encryption of only sensitive columns; this feature is available from the Amazon EMR 6.6 release and above.

Back on Hive itself: repair partitions manually using MSCK REPAIR. The MSCK REPAIR TABLE command was designed to manually add partitions that are added to or removed from the file system but are not present in the Hive metastore; use this statement on Hadoop partitioned tables to identify partitions that were manually added to the distributed file system (DFS). This addition is one-directional, because MSCK REPAIR TABLE doesn't remove stale partitions from the table: if partition directories are deleted from HDFS manually, the list of partitions in the metastore becomes stale (it still includes, for example, a dept=sales directory that no longer exists). Use ALTER TABLE DROP PARTITION to remove the stale partitions, as in the sketch below; the Hive ALTER TABLE command is used to update or drop a partition from the Hive metastore and the HDFS location (for a managed table).
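As a sketch of cleaning up the stale dept=sales entry (reusing the emp_part table from the earlier task for illustration):

-- MSCK REPAIR TABLE adds missing partitions but does not remove stale ones,
-- so drop the stale entry from the metastore explicitly
ALTER TABLE emp_part DROP IF EXISTS PARTITION (dept='sales');

-- then re-run the repair to register any directories added on the file system
MSCK REPAIR TABLE emp_part;

On Hive 3.0 and later, HIVE-17824 (cited above) also allows MSCK REPAIR TABLE emp_part SYNC PARTITIONS, which adds new partitions and drops stale ones in a single step.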
For deeper Cloudera guidance on this command, see Best Practices for Using MSCK REPAIR TABLE, Tuning Apache Hive Performance on the Amazon S3 Filesystem in CDH, and Tuning Hive MSCK (Metastore Check) Performance on S3.