First, thank you for providing the complete code. We're highly interested in your face swapping work and have been attempting to reproduce it. During our implementation process, we encountered the following issues and would greatly appreciate your guidance:
1. Face Swapping Results
For the example images provided in the source code, I used crop_and_mask.py to perform alignment, cropping, and mask generation (with mask_real_path set to None). I then fed the aligned images and their corresponding masks into inference_test_bench.py to generate the face-swapped results. However, the output differs from the results shown in your paper. Could you clarify whether I missed any important steps or made an error somewhere in this workflow? In the figure, the second and fourth columns show the visualized masks produced by crop_and_mask.
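For reference, this is roughly how I produced the mask visualizations in the figure. The file paths are placeholders for my local outputs of crop_and_mask.py, and the scaling is only there to make the small integer label values visible:

```python
import numpy as np
from PIL import Image

# Placeholder paths to the outputs of crop_and_mask.py on my machine.
aligned_path = "outputs/source_aligned.png"
mask_path = "outputs/source_mask.png"

aligned = Image.open(aligned_path).convert("RGB")
mask = np.array(Image.open(mask_path))

# The raw mask stores small integer labels, so it looks almost black;
# stretch it to the full 0-255 range purely for visualization.
if mask.max() > 0:
    mask_vis = (mask.astype(np.float32) / mask.max() * 255).astype(np.uint8)
else:
    mask_vis = mask.astype(np.uint8)
mask_vis = Image.fromarray(mask_vis).convert("RGB").resize(aligned.size)

# Side-by-side panel: aligned face on the left, visualized mask on the right.
panel = Image.new("RGB", (aligned.width * 2, aligned.height))
panel.paste(aligned, (0, 0))
panel.paste(mask_vis, (aligned.width, 0))
panel.save("outputs/aligned_and_mask.png")
```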
2. Applicability of crop_and_mask
Is crop_and_mask universally applicable to face datasets that do not come with masks?
Since the above results did not meet expectations, I further tested the crop_and_mask pipeline by generating masks for CelebA-HQ images. The resulting masks are inconsistent with those produced by process_CelebA_mask. Do you have any comments on this discrepancy? The figure below shows the differences between the masks generated by the two methods. Note that I enhanced the color contrast only for visualization; the masks themselves were generated strictly following the source code's workflow.
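In case it helps narrow down the discrepancy, this is the small check I ran to compare the two masks for the same CelebA-HQ image (again, the paths are placeholders for my local outputs):

```python
import numpy as np
from PIL import Image

# Placeholder paths to the same CelebA-HQ image's masks from the two pipelines.
mask_a_path = "outputs/crop_and_mask/00001_mask.png"   # produced by crop_and_mask.py
mask_b_path = "outputs/process_CelebA_mask/00001.png"  # produced by process_CelebA_mask

mask_a = np.array(Image.open(mask_a_path))
mask_b = np.array(Image.open(mask_b_path))

# Compare the label sets first: if the two pipelines use different
# label indices, the masks differ even for identical segmentations.
print("crop_and_mask labels:      ", np.unique(mask_a))
print("process_CelebA_mask labels:", np.unique(mask_b))

# Resize to a common resolution (nearest neighbour to keep labels intact)
# and measure how many pixels disagree.
if mask_a.shape != mask_b.shape:
    mask_a = np.array(
        Image.fromarray(mask_a).resize(mask_b.shape[:2][::-1], Image.NEAREST)
    )
disagreement = (mask_a != mask_b).mean()
print(f"pixel-wise disagreement: {disagreement:.2%}")
```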