Running `deeph-inference` aborts with an out-of-memory error while parsing the ABACUS overlap matrix:

```
$ deeph-inference --config inference.ini
User config name: ['inference.ini']
~~~~~~~ 2.get_local_coordinate
~~~~~~~ 3.get_pred_Hamiltonian
~~~~~~~ 4.rotate_back
~~~~~~~ 5.sparse_calc, command: xxx/sparse_calc.jl --input_dir xxx/get_S_process --config
####### Begin 1.parse_Overlap
Output subdirectories: OUT.ABACUS
Traceback (most recent call last):
  File "xxx/deeph-inference", line 8, in <module>
    sys.exit(main())
  File "xxx/deeph/scripts/inference.py", line 97, in main
    abacus_parse(OLP_dir, work_dir, data_name=f'OUT.{abacus_suffix}', only_S=True)
  File "xxx/deeph/preprocess/abacus_get_data.py", line 247, in abacus_parse
    overlap_dict, tmp = parse_matrix(os.path.join(input_path, "SR.csr"), 1)
  File "xxx/deeph/preprocess/abacus_get_data.py", line 215, in parse_matrix
    hamiltonian_cur = csr_matrix((np.array(line2).astype(float), np.array(line3).astype(int),
  File "xxx/site-packages/scipy/sparse/_compressed.py", line 1051, in toarray
    out = self._process_toarray_args(order, out)
  File "xxx/site-packages/scipy/sparse/_base.py", line 1298, in _process_toarray_args
    return np.zeros(self.shape, dtype=self.dtype, order=order)
numpy.core._exceptions._ArrayMemoryError: Unable to allocate 6.03 TiB for an array with shape (910224, 910224) and data type float64
```
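For scale, the 6.03 TiB in the error message is exactly what a dense float64 matrix of that shape requires; a quick sanity check (plain Python arithmetic, independent of DeepH):

```python
n = 910224                    # matrix dimension from the traceback
bytes_needed = n * n * 8      # one float64 entry is 8 bytes
print(bytes_needed / 2**40)   # ≈ 6.03 (TiB), matching the error
```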
The matrix dimension here is very large (910224 × 910224), so converting the sparse overlap matrix to a dense array tries to allocate roughly 6 TiB at once and exhausts available memory.

A workaround is to process the data in chunks: instead of densifying the whole matrix in a single call, the developers could split it into smaller blocks and handle them one at a time, which keeps peak memory bounded.
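A minimal sketch of that idea, assuming the goal is to run some per-row computation over the overlap matrix: keep it as a `scipy.sparse.csr_matrix` and densify it in row blocks rather than calling `toarray()` on the whole thing. The block size and the `process_block` hook are hypothetical placeholders, not part of DeepH:

```python
import numpy as np
from scipy.sparse import csr_matrix, random as sparse_random

def process_block(row_offset: int, block: np.ndarray) -> None:
    # Hypothetical hook: replace with the real per-block computation.
    print(f"rows {row_offset}..{row_offset + block.shape[0] - 1}: "
          f"max |value| = {np.abs(block).max():.3g}")

def process_in_row_blocks(mat: csr_matrix, block_rows: int = 4096) -> None:
    """Densify and process `mat` a few thousand rows at a time.

    Peak memory is block_rows * mat.shape[1] * 8 bytes instead of the
    full dense matrix (~6 TiB for a 910224 x 910224 float64 array).
    """
    n_rows = mat.shape[0]
    for start in range(0, n_rows, block_rows):
        stop = min(start + block_rows, n_rows)
        block = mat[start:stop].toarray()   # dense copy of this slice only
        process_block(start, block)

if __name__ == "__main__":
    # Small random CSR matrix standing in for the parsed SR.csr data.
    demo = sparse_random(10_000, 10_000, density=1e-4, format="csr")
    process_in_row_blocks(demo, block_rows=2048)
```

With 4096-row blocks and 910224 columns, each dense slice is about 28 GiB; shrinking `block_rows` trades more iterations for a proportionally smaller peak.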