SenNet + HOA · Research Code Competition · a month to go

SenNet + HOA - Hacking the Human Vasculature in 3D

Segment vasculature in 3D scans of human kidney

hengck23 · 7th in this Competition · Posted a month ago
This post earned a gold medal

[lb0.862] experiment results, hopefully an open gold solution by 21-jan-2024

more to come later …

🦟 nov-21:

🦟 nov-28:

  • advanced baseline with more augmentation and 3d post-processing
    https://www.kaggle.com/code/hengck23/lb0-808-resnet50-cc3d2d-unet-xy-zy-zx-cc3d
  • this concluded my one-week feasibility study:
    • estimated upper bound: LB 0.93
    • identified key problems: misses (broken large vessels, missing small vessels) and false positives (noise, sometimes outside the kidney volume)
    • solutions: multiscale and 3d models (i think a 3d transformer/hybrid/cnn should work well; it needs long-range attention)

i am obviously(?) overfitting the public LB data …


here is a visualization of the target vessel we are segmenting:

obviously, 3d segmentation will get better results. you probably can't rely on the deep net alone; you need some very smart 3d image post-processing to win.

87 Comments

hengck23

Topic Author

Posted 4 days ago

· 7th in this Competition

This post earned a bronze medal

20-dec:

recipe for LB 0.848

  • use a transformer (e.g. nextVIT base)
  • use train = kidney1/dense + kidney3/dense
  • use validate = kidney2
  • just a normal unet2d (with an additional encoder layer at H,W and H/2,W/2)
  • infer with the usual xy,zx,zy + 5x TTA (a sketch follows below)

details at the other posts below ….
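for reference, a minimal sketch of the xy,zx,zy + flip TTA inference described above (the model call, volume shapes and averaging scheme are my assumptions, not the exact submission code):

import torch

@torch.no_grad()
def tta_infer(model, volume):  # volume: (D,H,W) float tensor; model: 2d net -> logit map
    model.eval()
    prob = torch.zeros_like(volume)
    for axes in [(0, 1, 2), (1, 0, 2), (2, 0, 1)]:  # slice along z (xy), y (zx), x (zy)
        v = volume.permute(*axes)
        p = torch.zeros_like(v)
        for i in range(v.shape[0]):
            img = v[i][None, None]  # (1,1,h,w)
            # average plain and horizontally-flipped predictions (extend with rot90 for 5x TTA)
            p_i = torch.sigmoid(model(img))
            p_i = p_i + torch.flip(torch.sigmoid(model(torch.flip(img, dims=[3]))), dims=[3])
            p[i] = p_i[0, 0] / 2
        inv = [axes.index(k) for k in range(3)]  # undo the permutation
        prob += p.permute(*inv)
    return prob / 3  # average over the three slicing axes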


i am surprised that it can get LB 0.862 by training for long iterations (but CV does not reflect the change … maybe because of the wrong annotations on kidney2 and the segmentation sparsity, i.e. incomplete labels).

i.e. there is probably no reliable local validation set in this competition

Posted 4 days ago

maybe i missed it, but is there any reason a normal unet2d would suffice? any thoughts on a 3D approach?

Posted 4 days ago

· 2nd in this Competition

This post earned a bronze medal

How long does inference with the transformer take for you? It seems that the P100 without fp16 is very slow for transformer inference. Do you have any suggestions?

hengck23

Topic Author

Posted 4 days ago

· 7th in this Competition

you can google for fast transformers. i use nextvit, which was used in the kaggle RSNA breast mammography competition before. it takes 3 to 4 hr and can be sped up further with TensorRT to 3 hr.

Posted 4 days ago

· 2nd in this Competition

Thank you so much!

hengck23

Topic Author

Posted 4 days ago

· 7th in this Competition

update: nextVIT-base can get LB0.682 in 3 hr 20 min (the run just completed, so i can give the timing down to the minute).

it uses 5x TTA + 3 axes (xy,yz,xz) on a P100

Posted 4 days ago

· 12th in this Competition

Try fp16 + 2xT4 + nn.DataParallel; for me it's faster than using a single P100.

The model I chose is very slow (maxvit-small), and scoring a single checkpoint with xy/yz/xz + 5x TTA takes 8 hours…

I should try Next-ViT I think :D

hengck23

Topic Author

Posted 4 days ago

· 7th in this Competition

the fastest i can find

hengck23

Topic Author

Posted 7 days ago

· 7th in this Competition

This post earned a bronze medal

correct way to do validation:

  • just validating on kidney3 is not enough.
  • another test (just a test; don't use it to tune hyper-parameters) on kidney2 is probably more accurate
  • a low fp rate seems to be the key to a good LB score. for me fp=0.05 is optimal for the LB.
  • a transformer seems to be a "much" better model (maybe more robust against salt noise, etc.; see the SegFormer paper)

NOTE:

  • all models below have a local CV of 0.90 for kidney3 (dense). But when it comes to kidney2, the results are different, as shown in the table below.
  • all models are trained on kidney1 (dense) only
  • the top upper half of kidney2 has an annotation error (shift) … see another post below

hengck23

Topic Author

Posted 7 days ago

· 7th in this Competition

This post earned a bronze medal

another tip:
my previous two winning kaggle solutions had huge shake-ups. I could win because i kept to the following principles, instead of chasing the best parameter (e.g. threshold) for the solution:

  • try to improve your solution so that it is not sensitive to thresholds, i.e. performance is optimal over a range of thresholds (e.g. best AP versus mAP)
  • try to improve your solution so that it is not sensitive to data, i.e. there is low variance in accuracy across different validation samples

in summary, in addition to best results, think of ways to

  • measure variance
  • reduce variance
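a minimal sketch of what "measure variance" could mean in practice (surface_dice here is a placeholder for your metric of choice, e.g. the fast implementation linked further down):

import numpy as np

def score_stability(probs, truths, thresholds=np.arange(0.1, 0.65, 0.05)):
    # probs/truths: per-validation-sample predicted probabilities and ground-truth masks
    for t in thresholds:
        scores = [surface_dice(p > t, y) for p, y in zip(probs, truths)]  # placeholder metric
        # a robust solution keeps the mean high AND the std low over a wide threshold range
        print(f'th={t:.2f}  mean={np.mean(scores):.4f}  std={np.std(scores):.4f}')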

hengck23

Topic Author

Posted 11 days ago

· 7th in this Competition

This post earned a bronze medal

just an idea:

take 2 nearby slices and generate the vessels in between, both image and mask (a sketch follows below)
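as a sketch of how training pairs for this idea could be built (the gap and the pairing scheme are my assumptions):

import numpy as np

def make_interp_pairs(image, vessel, gap=2):
    # input: slices i and i+gap stacked as 2 channels;
    # target: the image and vessel mask of the slice in between
    pairs = []
    for i in range(image.shape[0] - gap):
        x = np.stack([image[i], image[i + gap]])
        y = (image[i + gap // 2], vessel[i + gap // 2])
        pairs.append((x, y))
    return pairs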

hengck23

Topic Author

Posted 11 days ago

· 7th in this Competition

hengck23

Topic Author

Posted 14 days ago

· 7th in this Competition

This post earned a bronze medal

https://www.kaggle.com/code/junkoda/fast-surface-dice-computation

based on the fast surface dice computation by @junkoda, i did an extensive study of the effect of the threshold on the local lb metric. For each model, i made measurements after every training epoch.

conclusion:

  • TTA and TTA+xy,zx,zy always improve surface-dice, even if the model is under- or over-fitted.
  • there are two optima:
    • early stopping (high threshold, >0.5)
    • just before over-fitting (0.2 to 0.3 threshold)
  • BCE loss seems unsuitable: both train and validation loss decrease steadily, but surface-dice moves up and down. One may want to look at e.g. edge loss, Hausdorff distance loss, etc.

Posted 14 days ago

· 10th in this Competition

I'm using a combination of 0.5Dice loss and 0.5BCE loss. The loss is continuously decreasing, but it seems that the surface dice and LB are also increasing. Perhaps Dice loss is more suitable?
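for reference, the 0.5 Dice + 0.5 BCE combination described above could look like this (a minimal sketch):

import torch
import torch.nn.functional as F

def dice_bce_loss(logit, target, smooth=1.0):
    prob = torch.sigmoid(logit)
    inter = (prob * target).sum()
    dice = 1 - (2 * inter + smooth) / (prob.sum() + target.sum() + smooth)  # soft dice loss
    bce = F.binary_cross_entropy_with_logits(logit, target)
    return 0.5 * dice + 0.5 * bce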

Posted 14 days ago

· 8th in this Competition

Is the model trained on xy xz yz or only inferred on xy xz yz?

hengck23

Topic Author

Posted 14 days ago

· 7th in this Competition

This post earned a bronze medal

the model is trained on xy, xz and yz

Posted 9 days ago

Hello, may I ask how you do multi-view training? Do you cycle through the three axes within one epoch? I put the data for the three views into one dataset, set the batch size to 1, and shuffled it, but it doesn't seem to work well. Could you please reply? Thank you very much.

hengck23

Topic Author

Posted 9 days ago

· 7th in this Competition

This post earned a bronze medal
import numpy as np
import torch
from torch.utils.data import Dataset

# DATA_META maps a kidney name to an object with .name, .image (D,H,W) and .vessel arrays

def make_train_id(
    meta_data=DATA_META,
    name=['kidney_1_dense', ],
):
    # enumerate every 2d slice of every volume, along all three axes
    train_id = []
    for n in name:
        d = meta_data[n]
        D, H, W = d.image.shape
        train_id += [(d.name, i, 'y') for i in range(H)]
        train_id += [(d.name, i, 'z') for i in range(D)]
        train_id += [(d.name, i, 'x') for i in range(W)]
    return train_id

train_id = make_train_id(...)


class HiPDataset(Dataset):
    def __init__(self, sample_id=train_id, augment=None):
        self.sample_id = sample_id
        self.augment = augment
        self.length = len(self.sample_id)

        # collect the distinct volume names present in this dataset
        unique_name = []
        for name, i, axis in sample_id:
            if name not in unique_name:
                unique_name.append(name)
        self.unique_name = sorted(unique_name)

    def __str__(self):
        string = ''
        string += f'\tlen = {len(self)}\n'
        string += f'\tunique_name = {self.unique_name}\n'
        return string

    def __len__(self):
        return self.length

    def __getitem__(self, index):
        name,i,axis = self.sample_id[index]

        d = DATA_META[name]
        D,H,W = d.image.shape

        # slice the volume along the requested axis
        if axis == 'z':
            image = d.image[i]
            vessel = d.vessel[i]
        elif axis == 'y':
            image = d.image[:, i]
            vessel = d.vessel[:, i]
        elif axis == 'x':
            image = d.image[:, :, i]
            vessel = d.vessel[:, :, i]

        image  = np.ascontiguousarray(image)
        vessel = np.ascontiguousarray(vessel)

        if self.augment is not None:
            image, vessel = self.augment(image, vessel)

        image  = np.ascontiguousarray(image)
        vessel = np.ascontiguousarray(vessel)
        #---

        r = {}
        r['index'] = index
        r['sample_id' ] = (name,i,axis)
        r['image' ] = torch.from_numpy(image).float()
        r['vessel'] = torch.from_numpy(vessel).float()
        return r
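a possible way to drive this dataset (my sketch, not from the post): since slices from different axes have different sizes, batch_size=1 sidesteps any padding or custom collate logic.

from torch.utils.data import DataLoader

dataset = HiPDataset(sample_id=train_id, augment=do_random_flip_rotate)  # augment function is posted further down
loader = DataLoader(dataset, batch_size=1, shuffle=True, num_workers=4)
for batch in loader:
    image, vessel = batch['image'], batch['vessel']  # spatial size varies per axis
    # forward/backward pass here ...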

Posted 9 days ago

This seems similar to building a three-view dataset offline. Did you use a batch size of 1, or did you pad the images from different views to the same size?

hengck23

Topic Author

Posted 15 days ago

· 7th in this Competition

This post earned a bronze medal

example of simple 3d flood fill

hengck23

Topic Author

Posted 15 days ago

· 7th in this Competition

This post earned a bronze medal
import numpy as np
import cv2

# simple 3d flood fill: a pixel joins the mask if its intensity is within th
# of a masked pixel in the 3x3 neighbourhood of the adjacent slice
# (image: (D,H,W) volume and dense: (D,H,W) annotation mask are assumed loaded)
predict = np.zeros((image.shape[1] - 2, image.shape[2] - 2), np.int32)

for iteration in range(100):
    # repeat grow up, grow down, grow up, grow down, ....
    for t in range(750, 0, -1):
        print(t)
        prev_mask  = dense[t + 1]
        prev_image = image[t + 1]
        curr_image = image[t]

        m = curr_image[1:-1, 1:-1]
        prev = prev_mask * prev_image  # intensities of the masked pixels in the previous slice

        th = 2
        diff = 0
        diff += np.abs(m - prev[1:-1, 1:-1]) < th  # [ 0, 0]
        diff += np.abs(m - prev[2:  , 1:-1]) < th  # [ 1, 0]
        diff += np.abs(m - prev[0:-2, 1:-1]) < th  # [-1, 0]
        diff += np.abs(m - prev[1:-1, 0:-2]) < th  # [ 0,-1]
        diff += np.abs(m - prev[2:  , 0:-2]) < th  # [ 1,-1]
        diff += np.abs(m - prev[0:-2, 0:-2]) < th  # [-1,-1]
        diff += np.abs(m - prev[1:-1, 2:  ]) < th  # [ 0, 1]
        diff += np.abs(m - prev[2:  , 2:  ]) < th  # [ 1, 1]
        diff += np.abs(m - prev[0:-2, 2:  ]) < th  # [-1, 1]

        grow = diff > 1  # require agreement with at least 2 neighbours
        # todo: grow across plane here ...

        predict += grow
        # image_show_norm('predict', predict)
        image_show_norm('predict', (predict > 0).astype(np.float32))  # image_show_norm: author's cv2 display helper
        cv2.waitKey(0)

hengck23

Topic Author

Posted 13 days ago

· 7th in this Competition

high resolution 2d/3d unet solution !!!

demo notebook: https://www.kaggle.com/code/hengck23/2d-to-3d-unet-demo


hengck23

Topic Author

Posted 13 days ago

· 7th in this Competition

how to properly design a 3d solution:

if you cannot fit a single 3d network (because of GPU memory constraints),
you probably need at least 2 nets (of different hierarchy scales); one possible reading is sketched below.
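a sketch under those assumptions (the nets, scale factor and conditioning scheme are mine, not the author's exact design):

import torch
import torch.nn.functional as F

def two_stage_segment(coarse_net, fine_net, volume, scale=4):
    # stage 1: a low-resolution 3d net sees the whole volume for global context
    small = F.interpolate(volume, scale_factor=1 / scale, mode='trilinear')
    coarse = torch.sigmoid(coarse_net(small))
    coarse = F.interpolate(coarse, size=volume.shape[2:], mode='trilinear')
    # stage 2: a high-resolution net refines, conditioned on the coarse mask
    fine = torch.sigmoid(fine_net(torch.cat([volume, coarse], dim=1)))
    return fine  # volume: (B,1,D,H,W); in practice stage 2 would run on crops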

hengck23

Topic Author

Posted 13 days ago

· 7th in this Competition

This post earned a bronze medal

[1] Memory transformers for full context and high-resolution 3D Medical Segmentation
https://arxiv.org/pdf/2210.05313.pdf

"Combined, they allow full attention over high resolution images, e.g. 512 x 512 x 256 voxels and above. Experiments on the BCV image segmentation dataset shows better performances than state-of-the-art CNN and transformer baselines, …."

hengck23

Topic Author

Posted 15 days ago

· 7th in this Competition

This post earned a bronze medal

trick to get lb 0.835

hengck23

Topic Author

Posted 15 days ago

· 7th in this Competition

This post earned a bronze medal
    def forward(self, batch):
        x = batch['image']
        x = x.expand(-1, 3, -1, -1)  # repeat the gray channel to 3 for the pretrained encoder
        B, C, H, W = x.shape

        encode = []
        # extra stems keep full (H,W) and half (H/2,W/2) resolution features,
        # since the convnext stem below already downsamples by 4
        xx = self.stem0(x); encode.append(xx)
        xx = F.avg_pool2d(xx, kernel_size=2, stride=2)
        xx = self.stem1(xx); encode.append(xx)

        e = self.encoder  # convnext
        x = e.stem(x)

        x = e.stages[0](x); encode.append(x)
        x = e.stages[1](x); encode.append(x)
        x = e.stages[2](x); encode.append(x)
        x = e.stages[3](x); encode.append(x)
        ##[print(f'encode_{i}', e.shape) for i,e in enumerate(encode)]

        last, decode = self.decoder(
            feature=encode[-1], skip=encode[:-1][::-1]
        )
        ##[print(f'decode_{i}', e.shape) for i,e in enumerate(decode)]
        ##print('last', last.shape)

        vessel = self.vessel(last)
        return vessel
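the trick, as far as the snippet shows, is the two extra stems: because the convnext stem downsamples by 4, stem0/stem1 supply full-resolution (H,W) and half-resolution (H/2,W/2) skip features to the unet decoder, which is what lets thin vessels survive to the output.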

hengck23

Topic Author

Posted 15 days ago

· 7th in this Competition

you can go to super-resolution

HINT:
alternatively, you can train a real super-resolution net (you have 20um images from the HiP-CT website)

hengck23

Topic Author

Posted 15 days ago

· 7th in this Competition

local CV/LB

hengck23

Topic Author

Posted 14 days ago

· 7th in this Competition

This post earned a bronze medal

why you need self-supervised learning

_50um_LADAF-2020-31_kidney : 1644x1108,2141

hengck23

Topic Author

Posted 14 days ago

· 7th in this Competition

HINT: it is easier to check if this is kidney 5 or 6

hengck23

Topic Author

Posted 13 days ago

· 7th in this Competition


quite noisy

hengck23

Topic Author

Posted 18 days ago

· 7th in this Competition

This post earned a bronze medal

lb0.829 …

normalisation is the key

import numpy as np

def norm_by_percentile(volume, low=10, high=99.8, alpha=0.01):
    # robust min/max taken from percentiles of the whole volume
    xmin = np.percentile(volume, low)
    xmax = np.percentile(volume, high)
    x = (volume - xmin) / (xmax - xmin)
    # soft-clip the tails instead of hard clipping to [0,1]
    x[x > 1] = (x[x > 1] - 1) * alpha + 1
    x[x < 0] = x[x < 0] * alpha
    # x = np.clip(x, 0, 1)
    return x

normalise by volume/subvolume, not per image
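for example (slice_files is a placeholder for your list of slice paths, not from the post):

import numpy as np
import cv2

volume = np.stack([cv2.imread(f, cv2.IMREAD_UNCHANGED) for f in slice_files]).astype(np.float32)
volume = norm_by_percentile(volume)  # one set of percentiles for the whole stack
# do NOT normalise per slice: per-image percentiles shift with slice content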

Posted 16 days ago

· 31st in this Competition

@hengck23 - is this the right way to read a 3D volume? because i get a negative stride error

class BuildDataset(torch.utils.data.Dataset):
    def __init__(self, img_voxel_paths, msk_voxel_paths=[], transforms=None):
        self.img_voxel = create_3d_voxel(img_voxel_paths)  # load the 3D voxel for images
        self.msk_voxel = create_3d_voxel_msk(msk_voxel_paths) if msk_voxel_paths else None  # load the 3D voxel for masks
        self.transforms = transforms

    def __len__(self):
        return self.img_voxel.shape[0]  # number of slices in the voxel

    def __getitem__(self, index):
        img = self.img_voxel[index, :, :].copy()  # extract the 2D slice from the image voxel
        img = np.expand_dims(img, axis=0)  # add channel dimension

        if self.msk_voxel is not None:
            msk = self.msk_voxel[index, :, :].copy()  # extract the 2D slice from the mask voxel
            msk = np.expand_dims(msk, axis=0)  # add channel dimension

            if self.transforms:
                data = self.transforms(image=img, mask=msk)
                img = data['image']
                msk = data['mask']

            return torch.tensor(img, dtype=torch.float32), torch.tensor(msk, dtype=torch.float32)
        else:
            if self.transforms:
                data = self.transforms(image=img)
                img = data['image']

            return torch.tensor(img, dtype=torch.float32)

hengck23

Topic Author

Posted 15 days ago

· 7th in this Competition

def do_random_flip_rotate(image, vessel):
    # random horizontal flip
    if np.random.rand() < 0.5:
        image  = np.flip(image, axis=1)
        vessel = np.flip(vessel, axis=1)
    # random vertical flip
    if np.random.rand() < 0.5:
        image  = np.flip(image, axis=0)
        vessel = np.flip(vessel, axis=0)
    # random 90/180/270 degree rotation
    if np.random.rand() < 0.5:
        k = np.random.choice([1, 2, 3])
        image  = np.rot90(image, k, axes=[0, 1])
        vessel = np.rot90(vessel, k, axes=[0, 1])

    # flip/rot90 return negatively-strided views; make them contiguous again
    image = np.ascontiguousarray(image)
    vessel = np.ascontiguousarray(vessel)
    return image, vessel
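(this is also the answer to the negative stride error above: np.flip and np.rot90 return views with negative strides, which torch.from_numpy cannot wrap; np.ascontiguousarray, or .copy(), removes them.)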

Posted 15 days ago

· 31st in this Competition

still i face the same error

hengck23

Topic Author

Posted 16 days ago

· 7th in this Competition

This post earned a bronze medal

if the holding cylinder is the same, this gives away the size and voxel resolution of the object

hengck23

Topic Author

Posted 17 days ago

· 7th in this Competition

i think this is a good pretrain/self-supervised or aux loss:

input: vol(d,h,w)
predict: mask(d,h,w)
aux target: vol(d+delta,h,w)

i.e. predict how the volume extrapolates
(this requires the model to "understand vessels" and grow them; a sketch follows below)
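a sketch of what such an auxiliary head could look like (the backbone, channel counts and L1 loss are my assumptions; the post only specifies the input/target shapes):

import torch
import torch.nn as nn
import torch.nn.functional as F

class VesselNetWithAux(nn.Module):
    def __init__(self, backbone, feat_ch=32, delta=8):
        super().__init__()
        self.backbone = backbone                      # shared 3d encoder-decoder -> (B,feat_ch,d,h,w)
        self.mask_head = nn.Conv3d(feat_ch, 1, 1)     # main task: vessel mask for the input volume
        self.aux_head = nn.Conv2d(feat_ch, delta, 1)  # aux task: extrapolate the next delta slices

    def forward(self, vol):                           # vol: (B,1,d,h,w)
        f = self.backbone(vol)
        mask = self.mask_head(f)                      # (B,1,d,h,w)
        next_slices = self.aux_head(f[:, :, -1])      # from last-slice features: (B,delta,h,w)
        return mask, next_slices

# loss = seg_loss(mask, gt_mask) + lambda_aux * F.l1_loss(next_slices, gt_next_slices)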

Posted 17 days ago

· 13th in this Competition

good idea👍

hengck23

Topic Author

Posted 17 days ago

· 7th in this Competition

by measuring the aux error on the hidden kidney5 and kidney6, we can also know whether they are similar to the train data or not :)
if we have enough time, online fine-tuning is possible

hengck23

Topic Author

Posted 13 days ago

· 7th in this Competition

treat extrapolation as an image-prompting problem

Sequential Modeling Enables Scalable Learning for Large Vision Models
https://yutongbai.com/lvm.html

Posted 20 days ago

· 2nd in this Competition

This post earned a bronze medal

Thank you very much for sharing. Based on your methods and information, here are the experimental data on my side.
By the way, how did you generate the kidney's mask? I haven't used it yet.

hengck23

Topic Author

Posted 19 days ago

· 7th in this Competition

" you generate the kidney's mask?"
i label a few and train a model to label the rest.

you should try the kaggle metric (surface dice) on your validation set as well.
iou is not good enough to capture cv/lb correlation

try lower threshold as well

hengck23

Topic Author

Posted 24 days ago

· 7th in this Competition

This post earned a bronze medal

IMPORTANT!!!!!
This is the black magic and the trick to winning!!!!

results of 3d flood fill with seed = one pixel

hengck23

Topic Author

Posted 24 days ago

· 7th in this Competition

This post earned a bronze medal

this reminds me of a 3d SAM (segment anything model).
if so, we can use self-supervised learning to train the encoder.

the seed will be the prompt.

the decoder performs the flood-fill segmentation.

hengck23

Topic Author

Posted 24 days ago

· 7th in this Competition

This post earned a bronze medal

volumetric rendering of the image volume

it shows the uniform intensity of the vessel walls. that is why flood fill works well


hengck23

Topic Author

Posted 19 days ago

· 7th in this Competition

This post earned a bronze medal

this is why the vessels are detected as broken

hengck23

Topic Author

Posted 25 days ago

· 7th in this Competition

This post earned a bronze medal

wrong annotation !!!!!

hengck23

Topic Author

Posted 25 days ago

· 7th in this Competition

i think there is an offset bug in the annotation for kidney2

hengck23

Topic Author

Posted 15 days ago

· 7th in this Competition

shift errors have happened before in the hidden-test ground truth in kaggle competitions…
anyone want to probe that?

hengck23

Topic Author

Posted 8 days ago

· 7th in this Competition

only the top half of the kidney 2 annotations is shifted

TOP:

BOTTOM:

    import numpy as np
    import pyvista as pv

    # hit / fp / miss: 3d arrays marking correct, false-positive and missed voxels
    pl = pv.Plotter()

    mhit  = pv.PolyData(np.stack(np.where(hit > 0.1)).T).glyph(geom=pv.Cube())
    mfp   = pv.PolyData(np.stack(np.where(fp > 0.1)).T).glyph(geom=pv.Cube())
    mmiss = pv.PolyData(np.stack(np.where(miss > 0.1)).T).glyph(geom=pv.Cube())
    pl.add_mesh(mhit, color='yellow')
    pl.add_mesh(mfp, color='green')
    pl.add_mesh(mmiss, color='red')
    pl.show()

hengck23

Topic Author

Posted a month ago

· 7th in this Competition

This post earned a silver medal

Posted a month ago

· 37th in this Competition

This post earned a bronze medal

Thanks for putting this together, I was just looking at doing something like this yesterday before getting pulled into holiday shenanigans. I re-scaled the data back to the full image size, RLE-encoded it, and saved it back to the training csv for anyone interested in using it without resizing on load: https://www.kaggle.com/datasets/squidinator/sennet-hoa-kidney-13-dense-full-kidney-masks


hengck23

Topic Author

Posted a month ago

· 7th in this Competition

This post earned a bronze medal

Yellow: MISS

Green: FP

Red: HIT

ALL

hengck23

Topic Author

Posted a month ago

· 7th in this Competition

This post earned a bronze medal

https://blog.research.google/2021/06/a-browsable-petascale-reconstruction-of.html
this competition reminds me of the ancient google paper on flood filling network (FFN)

grow_mask = FFN(input, mask)
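roughly, the FFN recurrence is (a sketch; ffn stands for a learned net that refines its own previous mask, starting from a seed):

import torch

@torch.no_grad()
def flood_fill_segment(ffn, volume, seed_mask, n_steps=20):
    mask = seed_mask  # start from a single seed point / small region
    for _ in range(n_steps):
        # the net sees the image and its own previous prediction,
        # and grows the mask outward a little each step
        mask = torch.sigmoid(ffn(torch.cat([volume, mask], dim=1)))
    return mask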




High-precision automated reconstruction of neurons with flood-filling networks
https://www.nature.com/articles/s41592-018-0049-4
https://www.biorxiv.org/content/10.1101/200675v1.full
https://www.mpg.de/12130750/neural-networks-connectome
https://www.youtube.com/watch?v=46SksPonI8I

hengck23

Topic Author

Posted a month ago

· 7th in this Competition

This post earned a bronze medal

proposed solution:

Posted a month ago

· 270th in this Competition

are you planning to apply a patch-based approach, or take the whole image into account? will we be able to fit the whole image into memory on the kaggle infrastructure?

hengck23

Topic Author

Posted a month ago

· 7th in this Competition

This post earned a bronze medal

GCO post processing:
(Global Constructive Optimization algorithm for generating smaller vessels)
[1] A Hybrid Approach to Full-Scale Reconstruction of Renal Arterial Network
https://arxiv.org/pdf/2303.01837.pdf

(e) shows the results from your deep network.
(j) shows the results of tree growing with GCO.

hengck23

Topic Author

Posted a month ago

· 7th in this Competition

beware !!!
it is not just any vessel, but those from the "arterial vascular tree"

data page: "kidney_1_dense - The whole of a right kidney at 50um resolution. The entire 3D arterial vascular tree … "


here you can see that not all vessels are annotated (kidney 1 dense)

i was wondering: do we need to separate arterial and venous?

hengck23

Topic Author

Posted a month ago

· 7th in this Competition

i am surprised that the deep CNN network can somehow differentiate between venous and arterial (target) vessels from a single image.

can anyone explain why?
(e.g. do they have a different appearance due to dye/contrast when the HiP-CT scan is taken ??? or do arterial vessels have thicker walls ???)

Posted a month ago

· 194th in this Competition

This post earned a bronze medal

The structure of the membrane differs significantly between arteries and veins. Arteries are richer in elastic fibres and thicker than veins. The walls of arteries and veins are very different, as can be seen in the linked histological image (HE-stain).
https://www.kidneypathology.com/English_version/Vessels_histology.htm

hengck23

Topic Author

Posted a month ago

· 7th in this Competition

This post earned a bronze medal

@yosukeyama
"The walls of arteries and veins are very different"

Thanks.
That is good news.
so it seems that we need not grow the tree from scratch; pure detection (+ post-processing) is probably enough.

hengck23

Topic Author

Posted a month ago

· 7th in this Competition

i hope there is no bug, but mixing kidney 1 and 3 in training makes the LB (much) worse.
reference: train with kidney 1 only: LB 0.757

submitted inference mode: add TTA 2xflip, 3xrot90 only

since there is no validation set, i just trained to the same number of iterations as in previous experiments.
this may not be optimal (and hence not an optimal lb)???

Posted 25 days ago

· 50th in this Competition

I trained initially on kidney 1; adding kidney 3 improved my LB from 0.65 -> 0.7. I do not know how to compare their resolutions, but they look pretty similar at 50um and 50.16um. The question is whether the LB is also 50um and the private set something completely different.

hengck23

Topic Author

Posted 25 days ago

· 7th in this Competition

Thanks, i will check again