Publication List - Kazuki Osawa (22 entries)
Journal Paper
International Conference (Reviewed)
-
Yuichiro Ueno,
Kazuki Osawa,
Yohei Tsuji,
Akira Naruse,
Rio Yokota.
Rich Information is Affordable: A Systematic Performance Analysis of Second-order Optimization Using K-FAC,
26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining,
Aug. 2020.
-
Rio Yokota,
Yohei Tsuji,
Kazuki Osawa.
Second Order Optimization for Distributed Data-parallel Deep Learning on 4000 GPUs,
I2R-TokyoTech Co-workshop on DL 2.0,
Mar. 2020.
-
Kazuki Osawa,
Siddarth Swaroop,
Anirudh Jain,
Runa Eschenhagen,
Richard E. Turner,
Rio Yokota,
Mohammad Emtiyaz Khan.
Practical Deep Learning with Bayesian Principles,
The 33rd Conference on Neural Information Processing Systems,
Dec. 2019.
-
Yohei Tsuji,
Kazuki Osawa,
Yuichiro Ueno,
Akira Naruse,
Rio Yokota,
Satoshi Matsuoka.
Performance Optimizations and Analysis of Distributed Deep Learning with Approximated Second-Order Optimization Method,
The 1st Workshop on Parallel and Distributed Machine Learning, held in conjunction with the 48th International Conference on Parallel Processing,
Proceedings of the 48th International Conference on Parallel Processing: Workshops,
Article No. 21,
Aug. 2019.
-
Kazuki Osawa,
Yohei Tsuji,
Yuichiro Ueno,
Akira Naruse,
Rio Yokota,
Satoshi Matsuoka.
Second-order Optimization Method for Large Mini-batch: Training ResNet-50 on ImageNet in 35 Epochs,
IEEE/CVF Conference on Computer Vision and Pattern Recognition,
June 2019.
-
Kazuki Osawa,
Rio Yokota.
Evaluating the Compression Efficiency of the Filters in Convolutional Neural Networks,
The 26th International Conference on Artificial Neural Networks,
Sept. 2017.
-
Kazuki Osawa,
Akira Sekiya,
Hiroki Naganuma,
Rio Yokota.
Accelerating Matrix Multiplication in Deep Learning by Using Low-Rank Approximation,
The 2017 International Conference on High Performance Computing & Simulation,
July 2017.
Domestic Conference (Reviewed)
-
Hiroki Naganuma,
Akira Sekiya,
Kazuki Osawa,
Hiroyuki Otomo,
Yuji Kuwamura,
Rio Yokota.
Evaluating the Performance of Deep Learning with Low Precision Arithmetic,
Pattern Recognition and Media Understanding,
Oct. 2017.
-
Kazuki Osawa,
Akira Sekiya,
Hiroki Naganuma,
Rio Yokota.
Acceleration of Convolutional Neural Networks Using Low-Rank Tensor Decomposition,
Pattern Recognition and Media Understanding,
Oct. 2017.
-
Hiroki Naganuma,
Kazuki Osawa,
Akira Sekiya,
Rio Yokota.
Acceleration of Compressed Models in Deep Learning Using Half Precision Arithmetic,
Japan Society for Industrial and Applied Mathematics Annual Meeting,
Sept. 2017.
-
Kazuki Osawa,
Akira Sekiya,
Hiroki Naganuma,
Rio Yokota.
Accelerating Convolutional Neural Networks Using Low-Rank Approximation,
22nd Conference of Japan Computational Engineering Society,
Proceedings of the 22nd Conference of Japan Computational Engineering Society,
May 2017.
Domestic Conference (Not reviewed / Unknown)
-
Kazuki Osawa,
Rio Yokota,
Chuan-Sheng Foo,
Vijay Chandrasekhar.
Second Order Optimization for Large Scale Parallel Deep Learning Through Analysis of the Fisher Information Matrix,
The 81st National Convention of IPSJ,
Mar. 2019.
-
Hikaru Nakata,
Kazuki Osawa,
Rio Yokota.
Variational Inference in Deep Learning Using Natural Gradient Descent,
The 81st National Convention of IPSJ,
Mar. 2019.
-
Rio Yokota,
Kazuki Osawa,
Yohei Tsuji,
Yuichiro Ueno,
Akira Naruse.
Second Order Optimization for Large Scale Parallel Deep Learning,
IEICE General Conference,
Mar. 2019.
-
Hiroyuki Otomo,
Kazuki Osawa,
Rio Yokota.
Distributed Learning of Deep Neural Networks Using the Kronecker Factorization of the Fisher Information Matrix,
The 163rd Workshop on High Performance Computing,
Mar. 2018.
-
Hiroyuki Otomo,
Kazuki Osawa,
Rio Yokota.
Deep Learning Using Kronecker-factored Approximation of Fisher Matrix,
The 80th National Convention of IPSJ,
Mar. 2018.
-
Yuji Kuwamura,
Kazuki Osawa,
Rio Yokota.
Hyper-parameter Tuning for Approximate Natural Gradient Methods,
The 80th National Convention of IPSJ,
Mar. 2018.
-
Akira Sekiya,
Kazuki Osawa,
Hiroki Naganuma,
Rio Yokota.
Acceleration of Matrix Multiplication in Deep Learning Using Low-Rank Approximation,
The 158th Workshop on High Performance Computing,
Mar. 2017.
Degree
-
Second-order Optimization for Large-scale Deep Learning,
Examination Summary,
Doctor (Engineering),
Tokyo Institute of Technology,
Mar. 26, 2021.
-
Second-order Optimization for Large-scale Deep Learning,
Summary,
Doctor (Engineering),
Tokyo Institute of Technology,
Mar. 26, 2021.
-
Second-order Optimization for Large-scale Deep Learning,
Thesis,
Doctor (Engineering),
Tokyo Institute of Technology,
Mar. 26, 2021.