[core] Fix issue reported by Coverity in core sub-component #30753
Conversation
Signed-off-by: Pawel Raasz <[email protected]>
Co-authored-by: Tomasz Jankowski <[email protected]>
Overall LGTM, some minor suggestions
```diff
@@ -38,10 +38,10 @@ void GroupQueryAttention::validate_and_infer_types() {
     const auto& batch_size = q_shape[0];
     const auto& sequence_len = q_shape[2];
     const auto& head_size = q_shape[3];
-    const auto& past_sequence_len = get_input_partial_shape(3)[2];
+    auto kv_shape = PartialShape{batch_size, m_kv_num_heads, get_input_partial_shape(3)[2], head_size};
```
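For context on the fix above: the removed line bound a named reference (past_sequence_len) into the result of a shape lookup, which is the pattern static analyzers flag as a potential dangling reference when the lookup yields a temporary. A minimal standalone illustration of that pattern, assuming a by-value getter (an assumption for illustration, not a claim about ov::Node's actual signature):

```cpp
#include <vector>

// Hypothetical by-value getter standing in for get_input_partial_shape(3);
// whether the real API returns by value is an assumption of this sketch.
std::vector<int> get_shape() { return {1, 8, 16, 64}; }

int main() {
    // Flagged pattern: binding a reference into a temporary. The vector
    // returned by get_shape() dies at the end of the full expression, so
    // `dim` would dangle afterwards:
    //     const auto& dim = get_shape()[2];
    // Inlining the access, as the diff does, uses the temporary only within
    // the expression that creates it:
    const int seq_len = get_shape()[2];  // element copied before the temporary dies
    return seq_len == 16 ? 0 : 1;
}
```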
Probably out of scope of this PR, but it's not ensured here that the rank of the shape is static and contains enough dimensions to safely access the element at index 2.
Yes, it is out of scope for this change; it was just a Coverity issue fix. Let's check it separately.
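For reference, a follow-up guard along the lines suggested above could look roughly like this standalone sketch (the helper name and message wording are illustrative, not from the PR):

```cpp
#include <openvino/core/partial_shape.hpp>
#include <iostream>

// Sketch of the suggested guard: confirm the rank is static and large enough
// before indexing dimension 2 (the helper name is hypothetical).
bool can_read_past_sequence_len(const ov::PartialShape& past_kv_shape) {
    return past_kv_shape.rank().is_static() && past_kv_shape.rank().get_length() >= 3;
}

int main() {
    ov::PartialShape ok{1, 8, ov::Dimension::dynamic(), 64};
    ov::PartialShape dynamic_rank = ov::PartialShape::dynamic();
    std::cout << can_read_past_sequence_len(ok) << '\n';            // 1: safe to index [2]
    std::cout << can_read_past_sequence_len(dynamic_rank) << '\n';  // 0: rank unknown
}
```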
```diff
@@ -52,8 +52,9 @@ void GroupQueryAttention::validate_and_infer_types() {
                           "GroupQueryAttention only suuports f32 and f16");

     set_output_type(0, element_type, PartialShape{batch_size, sequence_len, head_size * m_num_heads});
-    set_output_type(1, element_type, PartialShape{batch_size, m_kv_num_heads, output_kv_len, head_size});
-    set_output_type(2, element_type, PartialShape{batch_size, m_kv_num_heads, output_kv_len, head_size});
+    for (auto&& port : {1, 2}) {
```
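The hunk above is cut off at the loop header; judging from the two removed set_output_type calls, the loop body presumably reads as follows (an inference from the excerpt, not the verbatim PR code):

```cpp
// Presumed continuation of the truncated hunk: both KV outputs get the same
// shape, so a single loop over ports 1 and 2 replaces the duplicated calls.
for (auto&& port : {1, 2}) {
    set_output_type(port, element_type, PartialShape{batch_size, m_kv_num_heads, output_kv_len, head_size});
}
```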
Isn't auto&& an overkill for the primitive int type here? ;)
It will auto-deduce the type; it is not an r-value here, but it could be.
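A tiny standalone check of what auto&& actually deduces in this exact loop (illustrative only):

```cpp
#include <initializer_list>
#include <type_traits>

int main() {
    // In `for (auto&& port : {1, 2})` the range is a std::initializer_list<int>,
    // whose elements are const int lvalues, so auto&& deduces const int&.
    for (auto&& port : {1, 2}) {
        static_assert(std::is_same_v<decltype(port), const int&>,
                      "auto&& binds to the list elements as const int&");
        (void)port;
    }
    // Plain `auto` would copy each int instead; for int the cost is identical,
    // hence the "overkill" remark, but auto&& is harmless here.
}
```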
```cpp
if (auto& res_tensor = result->get_input_tensor(0); &res_tensor != &node.get_tensor()) {
    // If result input tensor is not the same as node tensor, then we need to move names
    // from result input tensor to node tensor
    node.get_tensor().set_names(res_tensor.get_names());
    res_tensor.set_names({});
}
```
As the check &res_tensor != &node.get_tensor() has been added, would it be possible to cover it with a unit test?
Current tests fail without it: there is no additional copy of the tensor names, and setting empty names just clears them.
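A rough sketch of what such a test could look like; the test name and setup are hypothetical, and it assumes the name-moving code runs by the time the tensors are inspected after model construction:

```cpp
#include <gtest/gtest.h>
#include <openvino/core/model.hpp>
#include <openvino/op/parameter.hpp>
#include <openvino/op/result.hpp>
#include <memory>
#include <string>
#include <unordered_set>

// When the Result's input tensor is the very same object as the node's output
// tensor, the added &res_tensor != &node.get_tensor() guard must keep the
// names intact instead of clearing them via set_names({}).
TEST(TensorNames, NamesSurviveWhenResultSharesTensor) {
    auto param = std::make_shared<ov::op::v0::Parameter>(ov::element::f32, ov::Shape{1, 3});
    param->get_output_tensor(0).set_names({"data"});
    auto result = std::make_shared<ov::op::v0::Result>(param);
    auto model = std::make_shared<ov::Model>(ov::ResultVector{result}, ov::ParameterVector{param});

    const std::unordered_set<std::string> expected{"data"};
    EXPECT_EQ(result->get_input_tensor(0).get_names(), expected);
}
```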
Co-authored-by: Katarzyna Mitrus <[email protected]>
Details:
Tickets: