Welcome to the Polarr SDK. Here you will find thorough documentation on how to use our framework to build beautiful creations.
```shell
git clone //swift repo
```
Computational photography is a growing field with abundant use cases. Here are some we found compelling and actionable with our current SDK platform.
After importing PolarrKit, you will be able to use the following models:
| Model | Output |
|---|---|
| SmartCrop | Normalized coordinates to crop the parent image. |
| Segmentation | Masked image of a provided subject. |
| SimilarityGrouping | Groups photos under a label. |
| AestheticRanking | Aesthetic scoring of an image. |
| Depth | Generates a depth map from any photo. |
```swift
import PolarrKit
```
Our SmartCrop model takes two simple parameters: an image and an optional aspect ratio to constrain the results. If no aspect ratio is provided, the highest-scoring crop is chosen.
```swift
let sourceImage = UIImage(contentsOfFile: "/path/to/image")! // contentsOfFile returns an optional

if let crop = SmartCrop(on: sourceImage) {
    // normalized crop coordinates
}

if let crop = SmartCrop(on: sourceImage, aspectRatio: 1.0) {
    // normalized crop coordinates
}
```
| Parameters | Type |
|---|---|
| on: | UIImage or CVPixelBuffer |
| aspectRatio: | Double (Optional) |

| Output | Type |
|---|---|
| SmartCrop.Output | CGRect |
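The returned CGRect is in normalized (0–1) coordinates, so it must be scaled to the source image's pixel size before cropping. A minimal sketch of that conversion, assuming the output convention above (the `denormalize` helper is illustrative, not part of PolarrKit):

```swift
import Foundation

// Scale a normalized crop rectangle up to the source image's pixel size.
// (Illustrative helper; not part of PolarrKit.)
func denormalize(_ crop: CGRect, in imageSize: CGSize) -> CGRect {
    CGRect(x: crop.origin.x * imageSize.width,
           y: crop.origin.y * imageSize.height,
           width: crop.size.width * imageSize.width,
           height: crop.size.height * imageSize.height)
}

// A centered 1:1 crop suggestion on a 4000x3000 image:
let normalized = CGRect(x: 0.125, y: 0.0, width: 0.75, height: 1.0)
let pixelRect = denormalize(normalized, in: CGSize(width: 4000, height: 3000))
```

The resulting pixel-space rect can then be passed to `CGImage.cropping(to:)` to produce the cropped image.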
Our Segmentation model takes two simple parameters: an image and a label identifying the subject to segment. If no label is provided, the label `person` is used automatically.
```swift
let sourceImage = UIImage(contentsOfFile: "/path/to/image")! // contentsOfFile returns an optional

if let segmentedDog = Segmentation(on: sourceImage, label: "dog") {
    // If there is a visible dog, it is segmented out into a new UIImage object.
}
```
The following labels are supported:

```swift
["background", "person", "bicycle", "car", "motorcycle", "airplane", "bus", "train", "truck", "boat", "traffic light", "fire hydrant", "stop sign", "parking meter", "bench", "bird", "cat", "dog", "horse", "sheep", "cow", "elephant", "bear", "zebra", "giraffe", "backpack", "umbrella", "handbag", "tie", "suitcase", "frisbee", "skis", "snowboard", "sports ball", "kite", "baseball bat", "baseball glove", "skateboard", "surfboard", "tennis racket", "bottle", "wine glass", "cup", "fork", "knife", "spoon", "bowl", "banana", "apple", "sandwich", "orange", "broccoli", "carrot", "hot dog", "pizza", "donut", "cake", "chair", "couch", "potted plant", "bed", "dining table", "toilet", "tv", "laptop", "mouse", "remote", "keyboard", "cell phone", "microwave", "oven", "toaster", "sink", "refrigerator", "book", "clock", "vase", "scissors", "teddy bear", "hair drier", "toothbrush"]
```
| Parameters | Type |
|---|---|
| on: | UIImage |
| label: | String |

| Output | Type |
|---|---|
| Segmentation.Output | UIImage |
The raw segmentation mask can be retrieved with SegmentationMask:

```swift
if let segmentationMaskOfDog = SegmentationMask(on: sourceImage, label: "dog") {
    // The segmentation mask of the dog.
}
```
SimilarityGrouping takes an array of images and outputs a two-dimensional integer array categorizing possible similarities.
```swift
let sourceImages = ["/1.JPG", "/2.JPG", "/3.JPG", "/4.JPG", "/5.JPG", "/6.JPG", "/7.JPG"]
    .compactMap { UIImage(contentsOfFile: $0) } // contentsOfFile returns an optional

if let similarities = SimilarityGrouping(on: sourceImages) {
    // Example output: [[0], [1], [2], [3], [4], [5, 6]]
}
```
| Parameters | Type |
|---|---|
| on: | [UIImage] |

| Output | Type |
|---|---|
| SimilarityGrouping.Output | [[Int]] |
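Each inner array of the `[[Int]]` output holds indices into the input image array, so near-duplicate shots land in the same group. A sketch of picking one representative per group, using the example output above (the first-element choice is illustrative, not something PolarrKit prescribes):

```swift
// Example output from SimilarityGrouping above: images 5 and 6 were similar.
let groups: [[Int]] = [[0], [1], [2], [3], [4], [5, 6]]

// Keep the first index of each group and collect the rest as duplicates.
let representatives = groups.compactMap { $0.first }
let duplicates = groups.flatMap { $0.dropFirst() }
print(representatives) // [0, 1, 2, 3, 4, 5]
print(duplicates)      // [6]
```

The representative indices can then be mapped back onto `sourceImages` to build a de-duplicated album.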
Useful aesthetic scores can be returned for any image processed.
```swift
let sourceImage = UIImage(contentsOfFile: "/path/to/image")! // contentsOfFile returns an optional

if let aestheticScore = AestheticScoring(on: sourceImage) {
    // Scores can be accessed as properties, e.g. aestheticScore.colorHarmony
}
```
| Parameters | Type |
|---|---|
| on: | UIImage or CVPixelBuffer |

| Output | Type |
|---|---|
| .colorHarmony | Double |
| .depthOfField | Double |
| .lighting | Double |
| .vividColor | Double |
| .motionBlur | Double |
| .interestingContent | Double |
| .objectsEmphasis | Double |
| .compositionalBalance | Double |
| .ruleOfThirds | Double |
| .repetition | Double |
| .symmetry | Double |
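The individual scores can be combined to rank a set of photos. A minimal sketch, assuming each score is a `Double` and using a simple unweighted mean (the sample values and the averaging scheme are illustrative, not PolarrKit output):

```swift
// Hypothetical per-image scores, e.g. [colorHarmony, lighting, ruleOfThirds].
let scores: [String: [Double]] = [
    "/1.JPG": [0.8, 0.6, 0.9],
    "/2.JPG": [0.4, 0.7, 0.5],
    "/3.JPG": [0.9, 0.8, 0.7],
]

// Average the scores and sort image paths from best to worst.
func overall(_ s: [Double]) -> Double { s.reduce(0, +) / Double(s.count) }
let ranked = scores.sorted { overall($0.value) > overall($1.value) }.map(\.key)
print(ranked) // ["/3.JPG", "/1.JPG", "/2.JPG"]
```

In practice you may want to weight scores differently per use case, e.g. favoring `.ruleOfThirds` and `.compositionalBalance` when picking a cover photo.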
Passing an image into Depth produces an image-based depth map whose pixel values can be iterated over to analyze and manipulate depth.
```swift
let sourceImage = UIImage(contentsOfFile: "/path/to/image")! // contentsOfFile returns an optional

if let depthMap = Depth(on: sourceImage) {
    // depthMap is an image encoding per-pixel depth
}
```
| Parameters | Type |
|---|---|
| on: | UIImage |

| Output | Type |
|---|---|
| Depth.Output | UIImage |
Operations can also be chained into pipelines with the `-->` operator. For example, segmenting a person and softening the mask edges before compositing:

```swift
let segmentation = [
    "mask": Segmentation(label: "person") --> ImageToTexture() --> GaussianBlurTexture(sigma: 3),
    "image": ImageToTexture()
] --> MaskTexture() --> TextureToImage()
```