Class GafferKeyRangePartitioner

- java.lang.Object
  - org.apache.hadoop.mapreduce.Partitioner<org.apache.accumulo.core.data.Key,org.apache.hadoop.io.Writable>
    - uk.gov.gchq.gaffer.accumulostore.operation.hdfs.handler.job.partitioner.GafferKeyRangePartitioner

All Implemented Interfaces:
- org.apache.hadoop.conf.Configurable

public class GafferKeyRangePartitioner extends org.apache.hadoop.mapreduce.Partitioner<org.apache.accumulo.core.data.Key,org.apache.hadoop.io.Writable> implements org.apache.hadoop.conf.Configurable
A copy of KeyRangePartitioner that swaps the RangePartitioner for the GafferRangePartitioner to fix a bug with opening the split points file.
Constructor Summary
- GafferKeyRangePartitioner()
Method Summary
- org.apache.hadoop.conf.Configuration getConf()
- int getPartition(org.apache.accumulo.core.data.Key key, org.apache.hadoop.io.Writable value, int numPartitions)
- void setConf(org.apache.hadoop.conf.Configuration conf)
- static void setNumSubBins(org.apache.hadoop.mapreduce.Job job, int num): Sets the number of random sub-bins per range.
- static void setSplitFile(org.apache.hadoop.mapreduce.Job job, String file): Sets the HDFS file name to use, containing a newline-separated list of Base64-encoded split points that represent ranges for partitioning.
Method Detail
getPartition
public int getPartition(org.apache.accumulo.core.data.Key key, org.apache.hadoop.io.Writable value, int numPartitions)
- Specified by: getPartition in class org.apache.hadoop.mapreduce.Partitioner<org.apache.accumulo.core.data.Key,org.apache.hadoop.io.Writable>
getConf
public org.apache.hadoop.conf.Configuration getConf()
- Specified by: getConf in interface org.apache.hadoop.conf.Configurable
setConf
public void setConf(org.apache.hadoop.conf.Configuration conf)
- Specified by: setConf in interface org.apache.hadoop.conf.Configurable
setSplitFile
public static void setSplitFile(org.apache.hadoop.mapreduce.Job job, String file)
Sets the HDFS file name to use, containing a newline-separated list of Base64-encoded split points that represent ranges for partitioning.
- Parameters:
  - job - the job
  - file - the splits file
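Since the split file is just a newline-separated list of Base64-encoded split points, it can be produced with plain Java before the job is submitted. The sketch below writes such a file locally; the split point values and file name are illustrative assumptions, and in practice the file would be copied into HDFS.

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Base64;
import java.util.List;
import java.util.stream.Collectors;

public class SplitFileWriter {

    /** Base64-encodes each split point and joins them with newlines,
     *  matching the format setSplitFile expects. */
    public static String encodeSplits(List<String> splitPoints) {
        return splitPoints.stream()
                .map(s -> Base64.getEncoder()
                        .encodeToString(s.getBytes(StandardCharsets.UTF_8)))
                .collect(Collectors.joining("\n"));
    }

    public static void main(String[] args) throws IOException {
        // Illustrative split points; real ones come from the table's key space.
        String contents = encodeSplits(List.of("row100", "row200", "row300"));

        // Written locally here; the job would read it from HDFS.
        Path splits = Files.createTempFile("splits", ".txt");
        Files.writeString(splits, contents + "\n");
        System.out.println(contents);
    }
}
```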
-
setNumSubBins
public static void setNumSubBins(org.apache.hadoop.mapreduce.Job job, int num)
Sets the number of random sub-bins per range.
- Parameters:
  - job - the job
  - num - the number of sub-bins
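The two static setters above are typically wired into a MapReduce job alongside setPartitionerClass. The following is a sketch, not a confirmed Gaffer recipe: the splits path, reducer count, and sub-bin value are illustrative assumptions.

```java
import org.apache.hadoop.mapreduce.Job;
import uk.gov.gchq.gaffer.accumulostore.operation.hdfs.handler.job.partitioner.GafferKeyRangePartitioner;

public class PartitionerSetup {

    public static void configure(Job job, String splitsFile) {
        // Route map output through the Gaffer partitioner instead of the
        // stock Accumulo KeyRangePartitioner.
        job.setPartitionerClass(GafferKeyRangePartitioner.class);

        // Point the partitioner at the newline-separated, Base64-encoded
        // split points file in HDFS (illustrative path supplied by caller).
        GafferKeyRangePartitioner.setSplitFile(job, splitsFile);

        // Optionally spread each range over several random sub-bins;
        // 2 is an arbitrary example value.
        GafferKeyRangePartitioner.setNumSubBins(job, 2);
    }
}
```

With n split points and k sub-bins, the job would generally be given (n + 1) * k reduce tasks so every bin maps to its own reducer.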