Skip to content Skip to sidebar Skip to footer

Max And Min For Several Fields Inside Pcollection In Apache Beam With Python

I am using apache beam via python SDK and have the following problem: I have a PCollection with approximately 1 mln entries, each entry in a PCollection looks like a list of 2-tupl

Solution 1:

I decided to give it a go using a custom CombineFn function to determine the minimum and maximum per each key. Then, join them with the input data using CoGroupByKey and apply the desired mapping to normalize the values.

"""Normalize PCollection values."""import logging
import argparse
import sys

import apache_beam as beam
from apache_beam.io import WriteToText
from apache_beam.options.pipeline_options import PipelineOptions


# custom CombineFn that outputs min and max valueclassMinMaxFn(beam.CombineFn):
  # initialize min and max values (I assumed int type)defcreate_accumulator(self):
    return (sys.maxint, 0)

  # update if current value is a new min or max      defadd_input(self, min_max, input):
    (current_min, current_max) = min_max
    returnmin(current_min, input), max(current_max, input)

  defmerge_accumulators(self, accumulators):
    return accumulators

  defextract_output(self, min_max):
    return min_max


defrun(argv=None):
  """Main entry point; defines and runs the pipeline."""
  parser = argparse.ArgumentParser()
  parser.add_argument('--output',
                      dest='output',
                      required=True,
                      help='Output file to write results to.')
  known_args, pipeline_args = parser.parse_known_args(argv)

  pipeline_options = PipelineOptions(pipeline_args)
  p = beam.Pipeline(options=pipeline_options)

  # create test data
  pc = [('foo', 1), ('bar', 5), ('foo', 5), ('bar', 9), ('bar', 2)]

  # first run through data to apply custom combineFn and determine min/max per key
  minmax = pc | 'Determine Min Max' >> beam.CombinePerKey(MinMaxFn())

  # group input data by key and append corresponding min and max 
  merged = (pc, minmax) | 'Join Pcollections' >> beam.CoGroupByKey()

  # apply mapping to normalize values according to 'norm_value = (value - min) / (max - min)'
  normalized = merged | 'Normalize values' >> beam.Map(lambda (a, (b, c)): (a, [float(val - c[0][0][0])/(c[0][0][1] -c[0][0][0]) for val in b]))

  # write results to output file
  normalized | 'Write results' >> WriteToText(known_args.output)

  result = p.run()
  result.wait_until_finish()

if __name__ == '__main__':
  logging.getLogger().setLevel(logging.INFO)
  run()

The snippet can be run with python SCRIPT_NAME.py --output OUTPUT_FILENAME. My test data, grouped by key, is:

('foo', [1, 5])
('bar', [5, 9, 2])

The CombineFn will return per key min and max:

('foo', [(1, 5)])
('bar', [(2, 9)])

The output of the join/cogroup by key operation:

('foo', ([1, 5], [[(1, 5)]]))
('bar', ([5, 9, 2], [[(2, 9)]]))

And after normalizing:

('foo', [0.0, 1.0])
('bar', [0.42857142857142855, 1.0, 0.0])

This was just a simple test so I’m sure it can be optimized for the mentioned volume of data but it seems to work as a starting point. Take into account that further considerations might be needed (i.e. avoid dividing by zero if min = max)

Post a Comment for "Max And Min For Several Fields Inside Pcollection In Apache Beam With Python"