Hadoop+Cassandra (1) - Cassandra in the Mapper -

2012-04-06T00:00:00+09:00 Cassandra Hadoop Java

Let's read a Cassandra column family from Hadoop and do something with it. The job itself is deliberately meaningless; treat this strictly as an example of the configuration needed to use Cassandra from Hadoop.

(Basic Hadoop setup is omitted.)

Environment

Hadoop is 0.20.205.0; Cassandra is around 1.0.8.

(Apparently the Hadoop line that Cassandra supports is 0.20.x.)

Hadoop-side setup

There is nothing in particular to configure on the Hadoop side itself, but Hadoop has to be able to load the Cassandra libraries, which means the following jars

  • apache-cassandra-<version>.jar
  • libthrift-<version>.jar
  • guava-<version>.jar

need to be on Hadoop's classpath; add them to HADOOP_CLASSPATH in $HADOOP_HOME/conf/hadoop-env.sh or thereabouts, as sketched below. The Cassandra-related job settings can apparently also be set in mapred-site.xml, but I won't do that here (they go straight into the JobConf instead). Once that's done, bring up HDFS and the MapReduce JobTracker (start-dfs.sh and start-mapred.sh).
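
For example, in hadoop-env.sh (paths and jar versions here are placeholders; use whatever your Cassandra distribution actually ships):

export CASSANDRA_LIB=/path/to/apache-cassandra/lib
export HADOOP_CLASSPATH=$CASSANDRA_LIB/apache-cassandra-1.0.8.jar:$CASSANDRA_LIB/libthrift-0.6.jar:$CASSANDRA_LIB/guava-r08.jar:$HADOOP_CLASSPATH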

Cassandra-side setup

Run the following in cassandra-cli to create the keyspace and column family, and load a few rows:

create keyspace Keyspace1;
use Keyspace1;

create column family Sample with default_validation_class = UTF8Type and key_validation_class = UTF8Type and comparator = UTF8Type;

# throw in some sample data
set Sample['0']['name'] = 'hoge';
set Sample['1']['name'] = 'fuga';
set Sample['2']['name'] = 'foobar';
set Sample['3']['name'] = 'hoge';
set Sample['4']['name'] = 'hoge';
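
You can sanity-check what went in from the same cassandra-cli session:

list Sample;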

HadoopClient1.java
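
The driver. It points the job's input at Cassandra through ConfigHelper, limits each row to the name column with a SlicePredicate, and wires up the Mapper and Reducer shown below.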

package sample;

import java.util.Arrays;

import org.apache.cassandra.hadoop.ColumnFamilyInputFormat;
import org.apache.cassandra.hadoop.ConfigHelper;
import org.apache.cassandra.thrift.SlicePredicate;
import org.apache.cassandra.utils.ByteBufferUtil;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

import sample.mapreduce.SampleCassandraMapper;
import sample.mapreduce.SampleReducer;

public class HadoopClient1 extends Configured implements Tool {

    public static void main(String[] args) throws Exception {
        System.exit(ToolRunner.run(new Configuration(), new HadoopClient1(), args));
    }

    @Override
    public int run(String[] args) throws Exception {
        JobConf conf = new JobConf(getConf());
        // the jar containing the job classes, as shipped to the cluster
        conf.setJar("sample.jar");

        // Thrift address/port of a Cassandra node to bootstrap from
        ConfigHelper.setInitialAddress(conf, "127.0.0.1");
        ConfigHelper.setRpcPort(conf, "9160");
        // the cluster's partitioner (RandomPartitioner is the default)
        ConfigHelper.setPartitioner(conf, "org.apache.cassandra.dht.RandomPartitioner");
        // the keyspace / column family to read from
        ConfigHelper.setInputColumnFamily(conf, "Keyspace1", "Sample");

        // fetch only the "name" column of each row
        SlicePredicate predicate = new SlicePredicate().setColumn_names(Arrays.asList(ByteBufferUtil.bytes("name")));
        ConfigHelper.setInputSlicePredicate(conf, predicate);

        Job job = new Job(conf);
        job.setMapperClass(SampleCassandraMapper.class);
        job.setReducerClass(SampleReducer.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        // read splits from Cassandra rather than from HDFS files
        job.setInputFormatClass(ColumnFamilyInputFormat.class);

        // the reduce output still goes to HDFS
        FileOutputFormat.setOutputPath(job, new Path("output"));

        return job.waitForCompletion(true) ? 0 : 1;
    }
}
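
The predicate above fetches only the name column. If you want every column of each row instead, a SliceRange also works; a sketch (count caps how many columns come back per row, and org.apache.cassandra.thrift.SliceRange has to be imported):

// fetch all columns per row (up to 1000) instead of naming them one by one
SliceRange range = new SliceRange(ByteBufferUtil.EMPTY_BYTE_BUFFER, ByteBufferUtil.EMPTY_BYTE_BUFFER, false, 1000);
SlicePredicate predicate = new SlicePredicate().setSlice_range(range);
ConfigHelper.setInputSlicePredicate(conf, predicate);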

SampleCassandraMapper.java
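
The Mapper is handed one Cassandra row per call: the row key as a ByteBuffer and the sliced columns as a SortedMap. It just emits (value of name, 1), i.e. plain word count.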

package sample.mapreduce;

import java.io.IOException;
import java.nio.ByteBuffer;
import java.util.SortedMap;

import org.apache.cassandra.db.IColumn;
import org.apache.cassandra.utils.ByteBufferUtil;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class SampleCassandraMapper extends Mapper<ByteBuffer, SortedMap<ByteBuffer, IColumn>, Text, IntWritable> {
    @Override
    public void map(ByteBuffer key, SortedMap<ByteBuffer, IColumn> columns, Context ctx) throws IOException, InterruptedException {
        // "columns" holds whatever the slice predicate fetched for this row
        IColumn column = columns.get(ByteBufferUtil.bytes("name"));

        // skip rows that have no "name" column
        if (column == null) {
            return;
        }

        // emit (column value, 1) for the reducer to count
        String str = ByteBufferUtil.string(column.value());

        ctx.write(new Text(str), new IntWritable(1));
    }
}

SampleReducer.java
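
The Reducer is the standard word-count tail end: sum up the 1s per distinct value.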

package sample.mapreduce;

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class SampleReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    public void reduce(Text word, Iterable<IntWritable> values, Context ctx) throws IOException, InterruptedException {
        // add up the 1s the mapper emitted for this value
        int cnt = 0;

        for (IntWritable w : values) {
            cnt += w.get();
        }

        ctx.write(word, new IntWritable(cnt));
    }
}

pom.xml
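
Dependency-wise, cassandra-all (which pulls in libthrift and guava) plus hadoop-core is enough.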

<?xml version="1.0" ?>
<project
    xmlns="http://maven.apache.org/POM/4.0.0"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4.0.0.xsd">

    <modelVersion>4.0.0</modelVersion>
    <groupId>omitted</groupId>
    <artifactId>omitted</artifactId>
    <version>omitted</version>
    <name>omitted</name>

    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>2.3.2</version>
                <configuration>
                    <source>1.6</source>
                    <target>1.6</target>
                    <encoding>UTF-8</encoding>
                </configuration>
            </plugin>
        </plugins>
    </build>

    <dependencies>
        <dependency>
            <groupId>org.apache.cassandra</groupId>
            <artifactId>cassandra-all</artifactId>
            <version>1.0.7</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-core</artifactId>
            <version>0.20.205.0</version>
        </dependency>
    </dependencies>
</project>
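
Build and run it roughly like this (assuming the build produces sample.jar; adjust to your pom's actual artifact name):

mvn package
hadoop jar sample.jar sample.HadoopClient1
hadoop fs -cat output/part-r-00000

With the sample data above, the result should look something like:

foobar	1
fuga	1
hoge	3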

Next: Hadoop+Cassandra (2) - Cassandra in the Reducer -