Refactor capture and transformation handles #59

Merged 84 commits on Sep 17, 2020
Changes from 72 commits
2535591
Simple test
shagren Jul 23, 2020
29716b2
More functions supported
shagren Aug 4, 2020
009148a
Merge pull request #1 from shagren/threading-gil-releasing
shagren Aug 4, 2020
68e9505
Fixes, Example
shagren Aug 5, 2020
0b2ab2f
Merge pull request #2 from shagren/threading-gil-releasing
shagren Aug 5, 2020
21f38a2
Merge remote-tracking branch 'upstream/develop' into develop
shagren Aug 10, 2020
516bfb1
Merge remote-tracking branch 'upstream/develop' into develop
shagren Aug 10, 2020
1b0c0b1
Merge remote-tracking branch 'upstream/develop' into develop
shagren Aug 19, 2020
3fa1349
Merge remote-tracking branch 'upstream/develop' into develop
shagren Aug 26, 2020
889072f
Add code quality tool
shagren Aug 26, 2020
6411a4a
Add .mypy_cache to .gitignore
shagren Aug 26, 2020
58005f0
Make line length 120 chars
shagren Aug 31, 2020
c94b0da
Initial playback support
shagren Aug 28, 2020
1e88ed5
Seek support
shagren Aug 28, 2020
d0264c2
Add tests
shagren Aug 28, 2020
44982bb
Format tests too
shagren Aug 28, 2020
fa2323d
Remove commented code
shagren Aug 28, 2020
04b0639
Smaller asset.
shagren Aug 31, 2020
52c6a64
Fix build
shagren Sep 1, 2020
e66dad4
Remove debug
shagren Sep 1, 2020
4af8758
Validate if open() method called twice
shagren Sep 1, 2020
9a7765a
Remove unused file placeholder
shagren Sep 1, 2020
146536f
Better typing
shagren Sep 1, 2020
da8fdd7
Merge branch 'develop' into mkv-support-3
lpasselin Sep 1, 2020
1264175
Merge branch 'develop' into mkv-support-3
shagren Sep 3, 2020
60f892b
Merge develop
shagren Sep 3, 2020
b5ce7b2
CI changes
shagren Sep 3, 2020
f6efd0a
CI changes
shagren Sep 3, 2020
152a38a
CI changes
shagren Sep 3, 2020
904aa25
CI changes
shagren Sep 3, 2020
ec25eac
CI changes
shagren Sep 3, 2020
04ce188
CI changes
shagren Sep 3, 2020
f6ca706
Some changes in tests definitions
shagren Sep 3, 2020
8009436
Fix Maklefile
shagren Sep 3, 2020
cbacd61
Fix
shagren Sep 3, 2020
ce95f04
Update playback_seek_timestamp function description
shagren Sep 3, 2020
71b8871
Typofix
shagren Sep 3, 2020
f769915
Update readme with small details.
lpasselin Sep 3, 2020
e5e5f98
rename _thread_safe to thread_safe
lpasselin Sep 3, 2020
e0f54a1
reformat line for readability
lpasselin Sep 3, 2020
36e63c9
force ubuntu-18.04. Version supported by SDK
lpasselin Sep 3, 2020
1c09d70
WIP
shagren Sep 1, 2020
1d938f9
WIP
shagren Sep 1, 2020
f091db3
Fix build
shagren Sep 1, 2020
11cabd3
Rebase mkv-support-3
shagren Sep 1, 2020
79f3960
Refactor tests
shagren Sep 1, 2020
063eacf
WIP: color controls
shagren Sep 2, 2020
337eed6
More color-module tests
shagren Sep 3, 2020
9f9125d
Refactor device_get_color_control_capabilities()
shagren Sep 3, 2020
c9f22ea
WIP: IMU support
shagren Sep 3, 2020
d6599fe
Merge remote-tracking branch 'upstream/develop' into refactor-transfo…
shagren Sep 3, 2020
0569641
CI fix
shagren Sep 3, 2020
cdec3e7
start/stop cameras support
shagren Sep 3, 2020
bed83b1
device_get_capture support
shagren Sep 3, 2020
c707908
Remove debug lines
shagren Sep 3, 2020
29d50b2
support of get_capture, get_imu_sample
shagren Sep 4, 2020
c4e4448
WIP: Calibration
shagren Sep 4, 2020
b1e2a11
WIP: Capture
shagren Sep 4, 2020
444e4b1
Support device_get_raw_calibration
shagren Sep 7, 2020
471b236
Merge branch 'develop' into refactor-transformation
shagren Sep 7, 2020
5ed8db0
Support creating calibration from json file
shagren Sep 7, 2020
b447310
convert_3d_to_3d support
shagren Sep 7, 2020
4757c0b
calibration_2d_to_3d support
shagren Sep 8, 2020
8dfb34d
WIP: Transformations support
shagren Sep 8, 2020
0798eba
Transformation functions
shagren Sep 9, 2020
b2ad1d5
Refactor transformation
shagren Sep 9, 2020
97ef4bc
Refactor examples
shagren Sep 10, 2020
2be3314
Add benchmark example
shagren Sep 10, 2020
8289354
Fix tests
shagren Sep 10, 2020
ac6ca02
Better playback example
shagren Sep 10, 2020
55145a5
Fix playback example
shagren Sep 10, 2020
7dd0034
Rollback some text changes
shagren Sep 10, 2020
5dfd125
rename capsule_xxxx_name to CAPSULE_XXXX_NAME
shagren Sep 11, 2020
8ede0c7
Text fix
shagren Sep 11, 2020
af0a144
Typo fix
shagren Sep 11, 2020
de36550
Refactor examples
shagren Sep 11, 2020
cc99d74
CR changes
shagren Sep 11, 2020
622092e
CR changes
shagren Sep 11, 2020
2013b63
CR changes
shagren Sep 11, 2020
8538aca
CR changes
shagren Sep 11, 2020
c95b9db
CR changes
shagren Sep 11, 2020
7cb043d
add py.typed to distribution
shagren Sep 15, 2020
9e37a8f
remove not required _start_imu() call
lpasselin Sep 17, 2020
d58ba19
fix PytestAssertRewriteWarning
lpasselin Sep 17, 2020
2 changes: 1 addition & 1 deletion .github/workflows/ci.yml
Original file line number Diff line number Diff line change
@@ -65,4 +65,4 @@ jobs:
pip install -e .
- name: Run tests
run: |
make test
make test-no-hardware
15 changes: 12 additions & 3 deletions Makefile
@@ -1,5 +1,5 @@
SOURCES=pyk4a example tests

TESTS=tests
.PHONY: setup fmt lint test help build
.SILENT: help
help:
@@ -9,7 +9,10 @@ help:
"- build: Build and install pyk4a package\n" \
"- fmt: Format all code\n" \
"- lint: Lint code syntax and formatting\n" \
"- test: Run tests"
"- test: Run tests\n"\
"- test-hardware: Run tests related from connected kinect"
"- test-no-hardware: Run tests without connected kinect"


setup:
pip install -r requirements-dev.txt
@@ -27,4 +30,10 @@ lint:
mypy $(SOURCES)

test:
pytest --cov=pyk4a
pytest --cov=pyk4a --verbose $(TESTS)

test-hardware:
pytest --cov=pyk4a -m "device" --verbose $(TESTS)

test-no-hardware:
pytest --cov=pyk4a -m "not device" --verbose $(TESTS)
159 changes: 159 additions & 0 deletions example/bench.py
@@ -0,0 +1,159 @@
from argparse import Action, ArgumentParser, Namespace
from enum import Enum
from time import monotonic

from pyk4a import FPS, ColorResolution, Config, DepthMode, ImageFormat, PyK4A, WiredSyncMode


class EnumAction(Action):
"""
Argparse action for handling Enums
"""

def __init__(self, **kwargs):
# Pop off the type value
enum = kwargs.pop("type", None)

# Ensure an Enum subclass is provided
if enum is None:
raise ValueError("type must be assigned an Enum when using EnumAction")
if not issubclass(enum, Enum):
raise TypeError("type must be an Enum when using EnumAction")

# Generate choices from the Enum
kwargs.setdefault("choices", tuple(e.name for e in enum))

super(EnumAction, self).__init__(**kwargs)

self._enum = enum

def __call__(self, parser, namespace, values, option_string=None):
# Convert value back into an Enum
setattr(namespace, self.dest, self._enum[values])


class EnumActionTuned(Action):
"""
Argparse action for handling Enums
"""

def __init__(self, **kwargs):
# Pop off the type value
enum = kwargs.pop("type", None)

# Ensure an Enum subclass is provided
if enum is None:
raise ValueError("type must be assigned an Enum when using EnumAction")
if not issubclass(enum, Enum):
raise TypeError("type must be an Enum when using EnumAction")

# Generate choices from the Enum
kwargs.setdefault("choices", tuple(e.name.split("_")[-1] for e in enum))

super(EnumActionTuned, self).__init__(**kwargs)

self._enum = enum

def __call__(self, parser, namespace, values, option_string=None):
# Convert value back into an Enum
items = {item.name.split("_")[-1]: item.value for item in self._enum}
setattr(namespace, self.dest, self._enum(items[values]))


def parse_args() -> Namespace:
parser = ArgumentParser(
description="Bench camera captures transfer speed. \n"
"You can check if you USB controller/cable has enough performance."
)
parser.add_argument("--device-id", type=int, default=0, help="Device ID, from zero. Default: 0")
parser.add_argument(
"--color-resolution",
type=ColorResolution,
action=EnumActionTuned,
default=ColorResolution.RES_720P,
help="Color sensor resoultion. Default: 720P",
)
parser.add_argument(
"--color-color_format",
type=ImageFormat,
action=EnumActionTuned,
default=ImageFormat.COLOR_BGRA32,
help="Color color_image color_format. Default: BGRA32",
)
parser.add_argument(
"--depth-mode",
type=DepthMode,
action=EnumAction,
default=DepthMode.NFOV_UNBINNED,
help="Depth sensor mode. Default: NFOV_UNBINNED",
)
parser.add_argument(
"--camera-fps", type=FPS, action=EnumActionTuned, default=FPS.FPS_30, help="Camera FPS. Default: 30"
)
parser.add_argument(
"--synchronized-images-only",
action="store_true",
dest="synchronized_images_only",
help="Only synchronized color and depth images, default",
)
parser.add_argument(
"--no-synchronized-images",
action="store_false",
dest="synchronized_images_only",
help="Color and Depth images can be non synced.",
)
parser.set_defaults(synchronized_images_only=True)
parser.add_argument(
"--wired-sync-mode",
type=WiredSyncMode,
action=EnumActionTuned,
default=WiredSyncMode.STANDALONE,
help="Wired sync mode. Default: STANDALONE",
)
return parser.parse_args()


def bench(config: Config, device_id: int):
device = PyK4A(config=config, device_id=device_id)
device.connect()
depth = color = depth_period = color_period = 0
print("Press CTRL-C top stop benchmark")
started_at = started_at_period = monotonic()
while True:
try:
capture = device.get_capture()
if capture.color is not None:
color += 1
color_period += 1
if capture.depth is not None:
depth += 1
depth_period += 1
elapsed_period = monotonic() - started_at_period
if elapsed_period >= 2:
print(
f"Color: {color_period / elapsed_period:0.2f} FPS, Depth: {depth_period / elapsed_period: 0.2f} FPS"
)
color_period = depth_period = 0
started_at_period = monotonic()
except KeyboardInterrupt:
break
elapsed = monotonic() - started_at
device.disconnect()
print()
print(f"Result: Color: {color / elapsed:0.2f} FPS, Depth: {depth / elapsed: 0.2f} FPS")


def main():
args = parse_args()
config = Config(
color_resolution=args.color_resolution,
color_format=args.color_format,
depth_mode=args.depth_mode,
synchronized_images_only=args.synchronized_images_only,
wired_sync_mode=args.wired_sync_mode,
)
bench(config, args.device_id)


if __name__ == "__main__":
main()
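The `EnumAction` pattern in `bench.py` can be exercised on its own. A condensed, self-contained sketch — the enum below is an illustrative stand-in for pyk4a's enums, and member lookup uses `Enum[name]` because the CLI passes member names, not values:

```python
from argparse import Action, ArgumentParser
from enum import Enum


class EnumAction(Action):
    """Accept an Enum member name on the CLI and store the member itself."""

    def __init__(self, **kwargs):
        # Pop the enum class off the `type` kwarg before argparse sees it.
        enum = kwargs.pop("type", None)
        if enum is None or not issubclass(enum, Enum):
            raise TypeError("type must be an Enum when using EnumAction")
        # Offer the member names as the valid CLI choices.
        kwargs.setdefault("choices", tuple(e.name for e in enum))
        super().__init__(**kwargs)
        self._enum = enum

    def __call__(self, parser, namespace, values, option_string=None):
        # `values` is a member name string, so look it up with Enum[name].
        setattr(namespace, self.dest, self._enum[values])


class DepthMode(Enum):  # illustrative stand-in for pyk4a.DepthMode
    NFOV_UNBINNED = 2
    WFOV_2X2BINNED = 3


parser = ArgumentParser()
parser.add_argument("--depth-mode", type=DepthMode, action=EnumAction,
                    default=DepthMode.NFOV_UNBINNED)
args = parser.parse_args(["--depth-mode", "WFOV_2X2BINNED"])
print(args.depth_mode)  # DepthMode.WFOV_2X2BINNED
```

Because `__init__` pops `type`, argparse performs no conversion of its own and hands the raw string to `__call__`, which maps it back to the enum member.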
26 changes: 13 additions & 13 deletions example/color_formats.py
@@ -25,27 +25,27 @@ def get_color_image_size(config, imshow=True):

def convert_to_bgra_if_required(k4a, img_color):
# examples for all possible pyk4a.ColorFormats
if k4a._config.color_format == pyk4a.ColorFormat.MJPG:
if k4a._config.color_format == pyk4a.ImageFormat.COLOR_MJPG:
img_color = cv2.imdecode(img_color, cv2.IMREAD_COLOR)
elif k4a._config.color_format == pyk4a.ColorFormat.NV12:
elif k4a._config.color_format == pyk4a.ImageFormat.COLOR_NV12:
img_color = cv2.cvtColor(img_color, cv2.COLOR_YUV2BGRA_NV12)
# this also works and it explains how the NV12 color format is stored in memory
# h, w = img_color.shape[0:2]
# this also works and it explains how the COLOR_NV12 color format is stored in memory
# h, w = color_image.shape[0:2]
# h = h // 3 * 2
# luminance = img_color[:h]
# chroma = img_color[h:, :w//2]
# img_color = cv2.cvtColorTwoPlane(luminance, chroma, cv2.COLOR_YUV2BGRA_NV12)
elif k4a._config.color_format == pyk4a.ColorFormat.YUY2:
# luminance = color_image[:h]
# chroma = color_image[h:, :w//2]
# color_image = cv2.cvtColorTwoPlane(luminance, chroma, cv2.COLOR_YUV2BGRA_NV12)
elif k4a._config.color_format == pyk4a.ImageFormat.COLOR_YUY2:
img_color = cv2.cvtColor(img_color, cv2.COLOR_YUV2BGRA_YUY2)
return img_color


if __name__ == "__main__":
imshow = True
config_BGRA32 = Config(color_format=pyk4a.ColorFormat.BGRA32)
config_MJPG = Config(color_format=pyk4a.ColorFormat.MJPG)
config_NV12 = Config(color_format=pyk4a.ColorFormat.NV12)
config_YUY2 = Config(color_format=pyk4a.ColorFormat.YUY2)
config_BGRA32 = Config(color_format=pyk4a.ImageFormat.COLOR_BGRA32)
config_MJPG = Config(color_format=pyk4a.ImageFormat.COLOR_MJPG)
config_NV12 = Config(color_format=pyk4a.ImageFormat.COLOR_NV12)
config_YUY2 = Config(color_format=pyk4a.ImageFormat.COLOR_YUY2)

nbytes_BGRA32 = get_color_image_size(config_BGRA32, imshow=imshow)
nbytes_MJPG = get_color_image_size(config_MJPG, imshow=imshow)
@@ -57,4 +57,4 @@ def convert_to_bgra_if_required(k4a, img_color):

# output:
# nbytes_BGRA32=3686400 nbytes_MJPG=229693
# BGRA32 is 16.04924834452944 larger
# COLOR_BGRA32 is 16.04924834452944 times larger
58 changes: 52 additions & 6 deletions example/playback.py
@@ -1,16 +1,61 @@
from argparse import ArgumentParser
from json import dumps, loads
from typing import Optional, Tuple

from pyk4a import PyK4APlayback
import cv2
import numpy as np

from pyk4a import ImageFormat, PyK4APlayback


def colorize(
image: np.ndarray,
clipping_range: Tuple[Optional[int], Optional[int]] = (None, None),
colormap: int = cv2.COLORMAP_HSV,
) -> np.ndarray:
if clipping_range[0] or clipping_range[1]:
img = image.clip(clipping_range[0], clipping_range[1])
else:
img = image.copy()
img = cv2.normalize(img, None, 0, 255, cv2.NORM_MINMAX, dtype=cv2.CV_8U)
img = cv2.applyColorMap(img, colormap)
return img


def convert_to_bgra_if_required(color_format, color_image):
# examples for all possible pyk4a.ColorFormats
if color_format == ImageFormat.COLOR_MJPG:
color_image = cv2.imdecode(color_image, cv2.IMREAD_COLOR)
elif color_format == ImageFormat.COLOR_NV12:
color_image = cv2.cvtColor(color_image, cv2.COLOR_YUV2BGRA_NV12)
# this also works and it explains how the COLOR_NV12 color format is stored in memory
# h, w = color_image.shape[0:2]
# h = h // 3 * 2
# luminance = color_image[:h]
# chroma = color_image[h:, :w//2]
# color_image = cv2.cvtColorTwoPlane(luminance, chroma, cv2.COLOR_YUV2BGRA_NV12)
elif color_format == ImageFormat.COLOR_YUY2:
color_image = cv2.cvtColor(color_image, cv2.COLOR_YUV2BGRA_YUY2)
return color_image


def info(playback: PyK4APlayback):
print(f"Record length: {playback.length / 1000000: 0.2f} sec")

calibration_str = playback.calibration_json
calibration_formatted = dumps(loads(calibration_str), indent=2)
print("=== Calibration ===")
print(calibration_formatted)

def play(playback: PyK4APlayback):
while True:
try:
capture = playback.get_next_capture()
if capture.color is not None:
cv2.imshow("Color", convert_to_bgra_if_required(playback.configuration["color_format"], capture.color))
if capture.depth is not None:
cv2.imshow("Depth", colorize(capture.depth, (None, 5000)))
key = cv2.waitKey(10)
if key != -1:
break
except EOFError:
break
cv2.destroyAllWindows()


def main() -> None:
@@ -29,6 +74,7 @@ def main() -> None:

if offset != 0.0:
playback.seek(int(offset * 1000000))
play(playback)

playback.close()

6 changes: 2 additions & 4 deletions example/viewer_depth.py
@@ -8,9 +8,9 @@
def main():
k4a = PyK4A(
Config(
color_resolution=pyk4a.ColorResolution.RES_720P,
color_resolution=pyk4a.ColorResolution.OFF,
depth_mode=pyk4a.DepthMode.NFOV_UNBINNED,
synchronized_images_only=True,
synchronized_images_only=False,
)
)
k4a.connect()
@@ -31,8 +31,6 @@ def main():
# color the image with the chosen color map
colored_depth = cv2.applyColorMap(normalized_depth, cv2.COLORMAP_HSV)
cv2.imshow("k4a", colored_depth)
# cv2.imshow('k4a', capture.ir)
# cv2.imshow('k4a', capture.color)
key = cv2.waitKey(10)
if key != -1:
cv2.destroyAllWindows()
50 changes: 50 additions & 0 deletions example/viewer_transformation.py
@@ -0,0 +1,50 @@
from typing import Optional, Tuple

import cv2
import numpy as np

import pyk4a
from pyk4a import Config, PyK4A


def colorize(
image: np.ndarray,
clipping_range: Tuple[Optional[int], Optional[int]] = (None, None),
colormap: int = cv2.COLORMAP_HSV,
) -> np.ndarray:
if clipping_range[0] or clipping_range[1]:
img = image.clip(clipping_range[0], clipping_range[1])
else:
img = image.copy()
img = cv2.normalize(img, None, 0, 255, cv2.NORM_MINMAX, dtype=cv2.CV_8U)
img = cv2.applyColorMap(img, colormap)
return img


def main():
k4a = PyK4A(Config(color_resolution=pyk4a.ColorResolution.RES_720P, depth_mode=pyk4a.DepthMode.NFOV_UNBINNED,))
k4a.connect()

while True:
capture = k4a.get_capture()
if capture.depth is not None:
cv2.imshow("Depth", colorize(capture.depth, (None, 5000)))
if capture.ir is not None:
cv2.imshow("IR", colorize(capture.ir, (None, 500), colormap=cv2.COLORMAP_JET))
if capture.color is not None:
cv2.imshow("Color", capture.color)
if capture.transformed_depth is not None:
cv2.imshow("Transformed Depth", colorize(capture.transformed_depth, (None, 5000)))
if capture.transformed_color is not None:
cv2.imshow("Transformed Color", capture.transformed_color)

key = cv2.waitKey(10)
if key != -1:
cv2.destroyAllWindows()
break

k4a.disconnect()


if __name__ == "__main__":
main()