
Showing 1–9 of 9 results for author: Shaked, T

Searching in archive cs.
  1. arXiv:2505.07661  [pdf]

    eess.IV cs.CV

    Hierarchical Sparse Attention Framework for Computationally Efficient Classification of Biological Cells

    Authors: Elad Yoshai, Dana Yagoda-Aharoni, Eden Dotan, Natan T. Shaked

    Abstract: We present SparseAttnNet, a new hierarchical attention-driven framework for efficient image classification that adaptively selects and processes only the most informative pixels from images. Traditional convolutional neural networks typically process entire images regardless of information density, leading to computational inefficiency and potential focus on irrelevant features. Our approach l…

    Submitted 12 May, 2025; originally announced May 2025.
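The core idea in entry 1, processing only the most informative pixels, can be illustrated with a minimal top-k selection step. This is an illustrative sketch, not the paper's actual model; the hard-coded scores stand in for learned attention weights:

```python
import numpy as np

def keep_top_k_pixels(image, scores, k):
    """Zero out all but the k highest-scoring pixels, so that downstream
    layers only see the (hopefully) most informative ones."""
    flat_scores = scores.ravel()
    top = np.argpartition(flat_scores, -k)[-k:]   # indices of k largest scores
    mask = np.zeros(flat_scores.size, dtype=bool)
    mask[top] = True
    return image * mask.reshape(image.shape)

image = np.array([[1.0, 2.0],
                  [3.0, 4.0]])
scores = np.array([[0.1, 0.9],
                   [0.8, 0.2]])   # stand-in for an attention map
sparse = keep_top_k_pixels(image, scores, k=2)
# keeps the pixels at the two highest-scoring positions (0,1) and (1,0)
```

In the hierarchical version described by the abstract, such a selection step would be applied coarse-to-fine, with the attention scores themselves learned end-to-end.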

  2. arXiv:2504.19000  [pdf, other]

    cs.LG eess.SP

    Unveiling and Mitigating Adversarial Vulnerabilities in Iterative Optimizers

    Authors: Elad Sofer, Tomer Shaked, Caroline Chaux, Nir Shlezinger

    Abstract: Machine learning (ML) models are often sensitive to carefully crafted yet seemingly unnoticeable perturbations. Such adversarial examples are considered to be a property of ML models, often associated with their black-box operation and sensitivity to features learned from data. This work examines the adversarial sensitivity of non-learned decision rules, and particularly of iterative optimizers. O…

    Submitted 26 April, 2025; originally announced April 2025.

    Comments: Under review for publication in the IEEE
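Entry 2's observation, that non-learned iterative optimizers are also adversarially sensitive, can be reproduced on plain gradient descent for least squares. This is a toy sketch, not the paper's method; the worst-case perturbation direction here simply follows from the linearity of the map from measurements to solution:

```python
import numpy as np

def gd_least_squares(A, b, steps=1000):
    """Plain gradient descent on 0.5 * ||Ax - b||^2, a non-learned
    iterative decision rule mapping measurements b to a solution x."""
    lr = 1.0 / np.linalg.norm(A, 2) ** 2   # step size from the spectral norm
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        x -= lr * A.T @ (A @ x - b)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 4))
b = rng.standard_normal(8)

x_clean = gd_least_squares(A, b)

# For full-column-rank A, the optimizer's output is x*(b) = pinv(A) @ b,
# a linear map. The most damaging measurement perturbation of norm eps is
# therefore along the top right-singular vector of pinv(A).
eps = 0.05
_, _, Vt = np.linalg.svd(np.linalg.pinv(A))
delta = eps * Vt[0]
x_adv = gd_least_squares(A, b + delta)
```

The resulting output change has norm `eps` times the largest singular value of `pinv(A)`, i.e. `eps / sigma_min(A)`, so ill-conditioned problems amplify tiny input perturbations.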

  3. arXiv:2411.06583  [pdf, other]

    eess.IV cs.AI cs.CV q-bio.QM

    Enhancing frozen histological section images using permanent-section-guided deep learning with nuclei attention

    Authors: Elad Yoshai, Gil Goldinger, Miki Haifler, Natan T. Shaked

    Abstract: In histological pathology, frozen sections are often used for rapid diagnosis during surgeries, as they can be produced within minutes. However, they suffer from artifacts and often lack crucial diagnostic details, particularly within the cell nuclei region. Permanent sections, on the other hand, contain more diagnostic detail but require a time-intensive preparation process. Here, we present a ge…

    Submitted 10 November, 2024; originally announced November 2024.

  4. arXiv:2409.07582  [pdf, other]

    cs.CV

    Minimizing Embedding Distortion for Robust Out-of-Distribution Performance

    Authors: Tom Shaked, Yuval Goldman, Oran Shayer

    Abstract: Foundational models, trained on vast and diverse datasets, have demonstrated remarkable capabilities in generalizing across different domains and distributions for various zero-shot tasks. Our work addresses the challenge of retaining these powerful generalization capabilities when adapting foundational models to specific downstream tasks through fine-tuning. To this end, we introduce a novel appr…

    Submitted 11 September, 2024; originally announced September 2024.

    Comments: Accepted to ECCV 2024 workshop
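Entry 4's goal, fine-tuning while limiting how far embeddings drift from the foundation model's, is commonly expressed as a task loss plus a distortion penalty. The following is a hypothetical sketch of one such objective; the weight `lam` and the squared-distance penalty are assumptions, not necessarily the paper's exact formulation:

```python
import numpy as np

def distortion_regularized_loss(emb_new, emb_base, task_loss, lam=0.1):
    """Fine-tuning objective: task loss plus a penalty on how far the
    fine-tuned embeddings drift from the frozen foundation-model ones."""
    distortion = np.mean(np.sum((emb_new - emb_base) ** 2, axis=-1))
    return task_loss + lam * distortion

emb_base = np.zeros((3, 4))      # embeddings from the frozen base model
emb_new = emb_base + 1.0         # embeddings after a fine-tuning step
loss = distortion_regularized_loss(emb_new, emb_base, task_loss=0.5)
```

The penalty trades off downstream accuracy against preserving the base model's embedding geometry, which is what supports out-of-distribution robustness.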

  5. arXiv:2407.21783  [pdf, other]

    cs.AI cs.CL cs.CV

    The Llama 3 Herd of Models

    Authors: Aaron Grattafiori, Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Alex Vaughan, Amy Yang, Angela Fan, Anirudh Goyal, Anthony Hartshorn, Aobo Yang, Archi Mitra, Archie Sravankumar, Artem Korenev, Arthur Hinsvark, Arun Rao, Aston Zhang, Aurelien Rodriguez, Austen Gregerson, Ava Spataru, Baptiste Roziere, et al. (536 additional authors not shown)

    Abstract: Modern artificial intelligence (AI) systems are powered by foundation models. This paper presents a new set of foundation models, called Llama 3. It is a herd of language models that natively support multilinguality, coding, reasoning, and tool usage. Our largest model is a dense Transformer with 405B parameters and a context window of up to 128K tokens. This paper presents an extensive empirical…

    Submitted 23 November, 2024; v1 submitted 31 July, 2024; originally announced July 2024.

  6. arXiv:2310.11112  [pdf]

    eess.IV cs.CV cs.LG

    Super resolution of histopathological frozen sections via deep learning preserving tissue structure

    Authors: Elad Yoshai, Gil Goldinger, Miki Haifler, Natan T. Shaked

    Abstract: Histopathology plays a pivotal role in medical diagnostics. In contrast to preparing permanent sections for histopathology, a time-consuming process, preparing frozen sections is significantly faster and can be performed during surgery, where the sample scanning time should be optimized. Super-resolution techniques allow imaging the sample at lower magnification, sparing scanning time. In this…

    Submitted 17 October, 2023; originally announced October 2023.

  7. Joint Privacy Enhancement and Quantization in Federated Learning

    Authors: Natalie Lang, Elad Sofer, Tomer Shaked, Nir Shlezinger

    Abstract: Federated learning (FL) is an emerging paradigm for training machine learning models using possibly private data available at edge devices. The distributed operation of FL gives rise to challenges that are not encountered in centralized machine learning, including the need to preserve the privacy of the local datasets, and the communication load due to the repeated exchange of updated models. Thes…

    Submitted 23 August, 2022; originally announced August 2022.
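The joint design in entry 7 builds on subtractive dithered quantization, whose quantization error is uniform and statistically independent of the input, so the same noise that compresses updates can also serve as privacy noise. A minimal scalar version (illustrative only; the paper works with lattice quantization and formal privacy guarantees):

```python
import numpy as np

def sdq_encode(x, delta, dither):
    """Quantize x + dither onto the grid delta * Z (what the device sends)."""
    return delta * np.round((x + dither) / delta)

def sdq_decode(q, dither):
    """The server subtracts the same dither (shared via a common seed)."""
    return q - dither

rng = np.random.default_rng(1)
x = rng.standard_normal(10_000)                        # model-update entries
delta = 0.5                                            # quantization step
u = rng.uniform(-delta / 2, delta / 2, size=x.shape)   # shared dither
x_hat = sdq_decode(sdq_encode(x, delta, u), u)
err = x_hat - x   # uniform on [-delta/2, delta/2], independent of x
```

Because the reconstruction error behaves like additive uniform noise independent of the data, it can be accounted for simultaneously in the quantization and privacy budgets.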

  8. arXiv:1812.11006  [pdf]

    eess.IV cs.CV cs.LG stat.ML

    TOP-GAN: Label-Free Cancer Cell Classification Using Deep Learning with a Small Training Set

    Authors: Moran Rubin, Omer Stein, Nir A. Turko, Yoav Nygate, Darina Roitshtain, Lidor Karako, Itay Barnea, Raja Giryes, Natan T. Shaked

    Abstract: We propose a new deep learning approach for medical imaging that copes with the problem of a small training set, the main bottleneck of deep learning, and apply it for classification of healthy and cancer cells acquired by quantitative phase imaging. The proposed method, called transferring of pre-trained generative adversarial network (TOP-GAN), is a hybridization between transfer learning and ge…

    Submitted 17 December, 2018; originally announced December 2018.
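Entry 8's transfer-learning recipe, reusing a pretrained network as a frozen feature extractor and training only a small head on the limited labeled set, can be sketched as follows. The random frozen projection stands in for the pretrained GAN discriminator trunk; all names and shapes here are illustrative:

```python
import numpy as np

def frozen_features(x, W_frozen):
    """Stand-in for the pretrained trunk: a fixed projection plus ReLU."""
    return np.maximum(x @ W_frozen, 0.0)

def train_head(X, y, W_frozen, lr=0.5, epochs=200):
    """Train only a logistic-regression head on the frozen features --
    the small trainable part that a tiny labeled set can support."""
    F = frozen_features(X, W_frozen)
    w, b = np.zeros(F.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(F @ w + b)))   # predicted probabilities
        g = p - y                                # logistic-loss gradient
        w -= lr * F.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

# Tiny two-class dataset: 40 samples, far fewer than end-to-end training needs.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1.0, 1.0, (20, 4)),
               rng.normal(+1.0, 1.0, (20, 4))])
y = np.array([0] * 20 + [1] * 20)
W_frozen = rng.standard_normal((4, 8))   # "pretrained", kept fixed
w, b = train_head(X, y, W_frozen)
```

Freezing the trunk shrinks the number of trainable parameters to the head alone, which is what makes the small training set workable.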

  9. arXiv:1606.07792  [pdf, other]

    cs.LG cs.IR stat.ML

    Wide & Deep Learning for Recommender Systems

    Authors: Heng-Tze Cheng, Levent Koc, Jeremiah Harmsen, Tal Shaked, Tushar Chandra, Hrishi Aradhye, Glen Anderson, Greg Corrado, Wei Chai, Mustafa Ispir, Rohan Anil, Zakaria Haque, Lichan Hong, Vihan Jain, Xiaobing Liu, Hemal Shah

    Abstract: Generalized linear models with nonlinear feature transformations are widely used for large-scale regression and classification problems with sparse inputs. Memorization of feature interactions through a wide set of cross-product feature transformations is effective and interpretable, while generalization requires more feature engineering effort. With less feature engineering, deep neural networks…

    Submitted 24 June, 2016; originally announced June 2016.
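Entry 9's architecture jointly trains a wide linear model over sparse cross-product features and a deep MLP over dense or embedded features, summing their logits before a sigmoid. A minimal forward pass (illustrative shapes and weights; the published model additionally trains the two parts jointly with different optimizers):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def wide_deep_forward(x_wide, x_deep, w_wide, deep_layers, b):
    """P(label=1) = sigmoid(w_wide . x_wide + MLP(x_deep) + b)."""
    h = x_deep
    for i, (W, bias) in enumerate(deep_layers):
        h = h @ W + bias
        if i < len(deep_layers) - 1:
            h = np.maximum(h, 0.0)   # ReLU on hidden layers only
    return sigmoid(x_wide @ w_wide + h + b)

# Tiny example: 3 cross-product features, 2 dense features, one hidden layer.
x_wide = np.array([1.0, 0.0, 1.0])   # e.g. active cross-product features
x_deep = np.array([0.5, -0.2])
w_wide = np.array([0.3, -0.1, 0.2])
deep_layers = [(np.full((2, 3), 0.1), np.zeros(3)),   # hidden layer
               (np.full((3, 1), 0.1), np.zeros(1))]   # scalar output logit
p = wide_deep_forward(x_wide, x_deep, w_wide, deep_layers, b=0.0)
```

The wide part memorizes specific feature co-occurrences while the deep part generalizes through learned dense representations, which is the trade-off the abstract describes.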