StridedSlice Op doesn't support 5-dimension tensors. #213

Closed
Nullkooland opened this issue Nov 10, 2021 · 2 comments
Labels
bug Something isn't working

Comments

@Nullkooland
Contributor

I'm trying to use the StridedSlice Op to handle the slice operation in YOLOv5 post-processing.

This involves 5-dimensional tensors; however, it seems that the StridedSlice Op only supports up to 4 dimensions.

Here is some example code:

#include <algorithm>
#include <array>
#include <cstdio>
#include <memory>
#include <vector>

#include <tim/vx/context.h>
#include <tim/vx/graph.h>
#include <tim/vx/operation.h>
#include <tim/vx/ops/elementwise.h>
#include <tim/vx/ops/stridedslice.h>

/* Using tensors of 5 dimensions. */
static constexpr std::array<int, 5> BEGIN = {0, 0, 0, 0, 2};
static constexpr std::array<int, 5> END = {1, 3, 10, 10, 4};
static constexpr std::array<int, 5> STRIDES = {1, 1, 1, 1, 1};

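/* Note: assuming TF-style mask semantics, a set bit in begin/end mask means
 * "ignore the supplied begin/end for that dimension and use the full range".
 * Because begin/end are reversed into VX order below, bit 0 corresponds to
 * the innermost (length-85) axis, which keeps its explicit 2:4 range. */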
static constexpr int MASK_BEGIN = 0b11110;
static constexpr int MASK_END = 0b11110;
static constexpr int MASK_SHRINK = 0b00000;

static constexpr std::array<size_t, 5> SHAPE_INPUT = {1, 3, 10, 10, 85};
static constexpr std::array<size_t, 5> SHAPE_OUTPUT = {1, 3, 10, 10, 2};
static constexpr size_t SLICE_AXIS = 4;

/* Using tensors of 4 dimensions. */
// static constexpr std::array<int, 4> BEGIN = {0, 0, 0, 2};
// static constexpr std::array<int, 4> END = {3, 10, 10, 4};
// static constexpr std::array<int, 4> STRIDES = {1, 1, 1, 1};

// static constexpr int MASK_BEGIN = 0b1110;
// static constexpr int MASK_END = 0b1110;
// static constexpr int MASK_SHRINK = 0b0000;

// static constexpr std::array<size_t, 4> SHAPE_INPUT = {3, 10, 10, 85};
// static constexpr std::array<size_t, 4> SHAPE_OUTPUT = {3, 10, 10, 2};
// static constexpr size_t SLICE_AXIS = 3;

static constexpr size_t LEN_DETECTION_FULL = 85;
static constexpr size_t LEN_DETECTION_SLICED = 2;
static constexpr size_t NUM_ELEMENTS_INPUT = 25500; // 1 * 3 * 10 * 10 * 85
static constexpr size_t NUM_ELEMENTS_OUTPUT = 600;  // 1 * 3 * 10 * 10 * 2
static constexpr size_t NUM_DETECTIONS = 300;       // 1 * 3 * 10 * 10

int main(int argc, char* argv[]) {
    auto context = tim::vx::Context::Create();
    auto graph = context->CreateGraph();

    tim::vx::ShapeType vxShapeInput;
    tim::vx::ShapeType vxShapeOutput;

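    // TIM-VX expects shapes in VX order (innermost/fastest-varying dimension
    // first), so reverse the C-order shapes before building the TensorSpecs.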
    std::reverse_copy(SHAPE_INPUT.cbegin(),
                      SHAPE_INPUT.cend(),
                      std::back_inserter(vxShapeInput));
    std::reverse_copy(SHAPE_OUTPUT.cbegin(),
                      SHAPE_OUTPUT.cend(),
                      std::back_inserter(vxShapeOutput));

    // Create TIM-VX tensors.
    auto specInput = tim::vx::TensorSpec(tim::vx::DataType::FLOAT32,
                                         vxShapeInput,
                                         tim::vx::TensorAttribute::INPUT);

    auto specOutput = tim::vx::TensorSpec(tim::vx::DataType::FLOAT32,
                                          vxShapeOutput,
                                          tim::vx::TensorAttribute::OUTPUT);

    auto tensorInput = graph->CreateTensor(specInput);
    auto tensorOutput = graph->CreateTensor(specOutput);

    std::vector<int> begin;
    std::vector<int> end;
    std::vector<int> strides;

    std::reverse_copy(BEGIN.cbegin(), BEGIN.cend(), std::back_inserter(begin));
    std::reverse_copy(END.cbegin(), END.cend(), std::back_inserter(end));
    std::reverse_copy(
        STRIDES.cbegin(), STRIDES.cend(), std::back_inserter(strides));

    // Create StridedSlice Op.
    /* input: [1, 3, 10, 10, 85] -> slice(range=[..., 2:4], stride=1) -> output:
     * [1, 3, 10, 10, 2] */
    auto opStridedSlice = graph->CreateOperation<tim::vx::ops::StridedSlice>(
        begin, end, strides, MASK_BEGIN, MASK_END, MASK_SHRINK);

    opStridedSlice->BindInput(tensorInput);
    opStridedSlice->BindOutput(tensorOutput);

    // Compile graph.
    bool ret = false;
    ret = graph->Compile();
    if (!ret) {
        std::exit(1);
    }

    std::array<float, NUM_ELEMENTS_INPUT> bufferInput;
    std::array<float, NUM_ELEMENTS_OUTPUT> bufferOutput;

    // Prepare input tensor data.
    bufferInput.fill(0.0F);
    for (size_t k = 0; k < NUM_DETECTIONS; k++) {
        float* dataPtr = bufferInput.data() + k * LEN_DETECTION_FULL;
        for (size_t i = BEGIN[SLICE_AXIS]; i < END[SLICE_AXIS]; i++) {
            dataPtr[i] = static_cast<float>(i);
        }
    }

    // Run graph.
    ret = tensorInput->CopyDataToTensor(bufferInput.data());
    ret = graph->Run();
    ret = tensorOutput->CopyDataFromTensor(bufferOutput.data());

    // Print output tensor data.
    for (size_t k = 0; k < NUM_DETECTIONS; k++) {
        const float* dataPtr = bufferOutput.data() + k * LEN_DETECTION_SLICED;
        for (size_t i = 0; i < LEN_DETECTION_SLICED; i++) {
            std::printf("%.1F, ",
                        dataPtr[i]); // Expected to be [begin, end-1] per line.
        }
        std::printf("\n");
    }

    return static_cast<int>(!ret);
}

If I use 4-dimensional tensors (the commented-out variant above), it runs as expected, but with 5-dimensional tensors the graph fails to compile and gives me the following error:

Failed to initialize Kernel "vivante.nn.tensor.stride_slice" of Node 0x55555564c0e0 (status = -8)
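
Until 5-D support lands, one possible interim approach (just a sketch, not something from this thread) is to keep the 5-D input/output tensors but squeeze the leading batch dimension of 1 inside the graph, run the 4-D StridedSlice, and reshape back. This assumes tim::vx::ops::Reshape (from tim/vx/ops/reshape.h) takes the target shape in VX order and that intermediate tensors can be declared with TensorAttribute::TRANSIENT; names like tensorMid4d are made up for illustration.

#include <tim/vx/ops/reshape.h>

// VX-order shapes (innermost dimension first).
std::vector<uint32_t> shapeMid4d = {85, 10, 10, 3};
std::vector<uint32_t> shapeOut4d = {2, 10, 10, 3};
std::vector<uint32_t> shapeOut5d = {2, 10, 10, 3, 1};

auto specMid4d = tim::vx::TensorSpec(tim::vx::DataType::FLOAT32, shapeMid4d,
                                     tim::vx::TensorAttribute::TRANSIENT);
auto specOut4d = tim::vx::TensorSpec(tim::vx::DataType::FLOAT32, shapeOut4d,
                                     tim::vx::TensorAttribute::TRANSIENT);
auto tensorMid4d = graph->CreateTensor(specMid4d);
auto tensorOut4d = graph->CreateTensor(specOut4d);

// Squeeze the leading batch dimension: [85, 10, 10, 3, 1] -> [85, 10, 10, 3].
auto opSqueeze = graph->CreateOperation<tim::vx::ops::Reshape>(shapeMid4d);
opSqueeze->BindInput(tensorInput);
opSqueeze->BindOutput(tensorMid4d);

// 4-D StridedSlice: take 2:4 along the innermost (length-85) axis,
// full range on the other axes via the begin/end masks.
std::vector<int> begin4d = {2, 0, 0, 0};
std::vector<int> end4d = {4, 10, 10, 3};
std::vector<int> strides4d = {1, 1, 1, 1};
auto opSlice4d = graph->CreateOperation<tim::vx::ops::StridedSlice>(
    begin4d, end4d, strides4d, 0b1110, 0b1110, 0b0000);
opSlice4d->BindInput(tensorMid4d);
opSlice4d->BindOutput(tensorOut4d);

// Restore the batch dimension: [2, 10, 10, 3] -> [2, 10, 10, 3, 1].
auto opUnsqueeze = graph->CreateOperation<tim::vx::ops::Reshape>(shapeOut5d);
opUnsqueeze->BindInput(tensorOut4d);
opUnsqueeze->BindOutput(tensorOutput);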
@sunshinemyson
Contributor

@Goose-Bomb ,

Currently, we cannot support 5D tensors with StridedSlice; this could be fixed at the end of Dec.

sunshinemyson added the bug label Nov 16, 2021
thezha pushed a commit that referenced this issue Jan 10, 2022
test case can be found #213

Signed-off-by: xiang.zhang <xiang.zhang@verisilicon.com>
@sunshinemyson
Contributor

Solved in v1.1.37
