
How to export Apple's MobileCLIP to CoreML files #45

Closed
andforce opened this issue Sep 2, 2024 · 6 comments

@andforce
Contributor

andforce commented Sep 2, 2024

Hello, and thank you for open-sourcing such an excellent app.

I'm new to CoreML, and I noticed that the app recently added support for Apple's MobileCLIP.

I'd like to learn how to use it in iOS development, that is, how to export MobileCLIP to the .mlmodelc format.

Could you add a new .ipynb file describing how to export these four files?

ImageEncoder_mobileCLIP_s2.mlmodelc
merges.txt
TextEncoder_mobileCLIP_s2.mlmodelc
vocab.json

Thanks!

@mazzzystar
Owner

My export code is quite messy and disorganized, and I haven't had a chance to clean it up yet. If you want to learn about model exporting, these two .ipynb notebooks should be sufficient:
https://github.com/mazzzystar/Queryable/blob/main/PyTorch2CoreML-HuggingFace.ipynb
https://github.com/mazzzystar/Queryable/blob/main/PyTorch2CoreML.ipynb
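
At a high level both notebooks follow the same pattern: wrap the encoder in a small torch.nn.Module, trace it with torch.jit.trace, and hand the traced module to ct.convert. A minimal sketch of that flow for the text encoder (the wrapper class, the ViT-B/32 checkpoint, and the output file name below are illustrative, not the exact notebook code):

import clip
import torch
import numpy as np
import coremltools as ct

# Load a CLIP model on CPU; swap in whichever variant you want to export.
model, _ = clip.load("ViT-B/32", device="cpu")
model.eval()

class TextEncoder(torch.nn.Module):
    """Thin wrapper so torch.jit.trace sees a plain tensor -> tensor module."""
    def __init__(self, clip_model):
        super().__init__()
        self.clip_model = clip_model

    def forward(self, prompt):
        return self.clip_model.encode_text(prompt)

example_prompt = clip.tokenize("a photo of a dog")  # token ids, shape [1, 77]
traced_model = torch.jit.trace(TextEncoder(model).eval(), example_prompt)

text_encoder_model = ct.convert(
    traced_model,
    convert_to="mlprogram",
    minimum_deployment_target=ct.target.iOS16,
    inputs=[ct.TensorType(name="prompt", shape=[1, 77], dtype=np.int32)],
    outputs=[ct.TensorType(name="embOutput", dtype=np.float32)],
)
text_encoder_model.save("TextEncoder.mlpackage")
# The image encoder is exported the same way with a ct.ImageType input;
# Xcode compiles the .mlpackage into the .mlmodelc bundled with the app.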

I don't intend to add new .ipynb files that would further bloat the existing code.

Additionally, it would be helpful for future readers if your post were in English.

@andforce
Contributor Author

andforce commented Sep 4, 2024


Thank you for the reply.
I tried converting it myself based on your code, but I ran into quite a few errors that prevented it from working. I suspect the issue is version incompatibility between coremltools, PyTorch, and numpy. Could you let me know which versions of coremltools, PyTorch, and numpy you are using?

Thanks

@mazzzystar
Owner


On my local MacBook, the environment is:

pip install coremltools==8.0b2 torch==1.13.1 numpy==1.25.2

@andforce
Contributor Author

andforce commented Sep 4, 2024


Following your environment:

pip install coremltools==8.0b2 torch==1.13.1 numpy==1.25.2

I got a new error:

AttributeError: module 'torch' has no attribute 'float8_e4m3fn'

(python310) ➜ Queryable git:(cloud) ✗ pip show torch
Name: torch
Version: 1.13.1
Summary: Tensors and Dynamic neural networks in Python with strong GPU acceleration
Home-page: https://pytorch.org/
Author: PyTorch Team
Author-email: [email protected]
License: BSD-3
Location: /home/dy/anaconda3/envs/python310/lib/python3.10/site-packages
Requires: typing_extensions
Required-by: clip, torchaudio, torchvision
(python310) ➜ Queryable git:(cloud) ✗ pip show coremltools
Name: coremltools
Version: 8.0b2
Summary: Community Tools for Core ML
Home-page: https://github.com/apple/coremltools
Author: Apple Inc.
Author-email: [email protected]
License: BSD
Location: /home/dy/anaconda3/envs/python310/lib/python3.10/site-packages
Requires: attrs, cattrs, numpy, packaging, protobuf, pyaml, sympy, tqdm
Required-by:

The stack trace is:

AttributeError                            Traceback (most recent call last)
Cell In[3], line 3
      1 import torch
      2 import clip
----> 3 import coremltools as ct
      4 import numpy as np
      5 from PIL import Image

File ~/anaconda3/envs/python310/lib/python3.10/site-packages/coremltools/__init__.py:97
     93 _LOWEST_ALLOWED_SPECIFICATION_VERSION_FOR_MILPROGRAM = _SPECIFICATION_VERSION_IOS_15
     96 # expose sub packages as directories
---> 97 from . import converters, models, optimize, proto
     98 # expose unified converter in coremltools package level
     99 from .converters import ClassifierConfig

File ~/anaconda3/envs/python310/lib/python3.10/site-packages/coremltools/optimize/__init__.py:11
      8 from . import coreml
     10 if _HAS_TORCH:
---> 11     from . import torch

File ~/anaconda3/envs/python310/lib/python3.10/site-packages/coremltools/optimize/torch/__init__.py:6
      1 #  Copyright (c) 2024, Apple Inc. All rights reserved.
      2 #
      3 #  Use of this source code is governed by a BSD-3-clause license that can be
      4 #  found in the LICENSE.txt file or at https://opensource.org/licenses/BSD-3-Clause
----> 6 from coremltools.optimize.torch import (
      7     base_model_optimizer,
      8     layerwise_compression,
      9     optimization_config,
     10     palettization,
     11     pruning,
     12     quantization,
     13 )
     15 from ._logging import init_root_logger as _init_root_logger
     17 _logger = _init_root_logger()

File ~/anaconda3/envs/python310/lib/python3.10/site-packages/coremltools/optimize/torch/palettization/__init__.py:59
      1 #  Copyright (c) 2024, Apple Inc. All rights reserved.
      2 #
      3 #  Use of this source code is governed by a BSD-3-clause license that can be
      4 #  found in the LICENSE.txt file or at https://opensource.org/licenses/BSD-3-Clause
      6 """
      7 .. _coremltools_optimize_torch_palettization:
      8 
   (...)
     56     :members: compress
     57 """
---> 59 from .fake_palettize import FakePalettize
     60 from .palettization_config import DKMPalettizerConfig, ModuleDKMPalettizerConfig
     61 from .palettizer import DKMPalettizer

File ~/anaconda3/envs/python310/lib/python3.10/site-packages/coremltools/optimize/torch/palettization/fake_palettize.py:27
     25 from ._utils import get_shard_list as _get_shard_list
     26 from ._utils import vectorize as _vectorize
---> 27 from .palettization_config import DEFAULT_PALETTIZATION_ADVANCED_OPTIONS
     29 # This is the maximum torch version currently supported for supporting the
     30 # FakePalettizerTensorHook as the backward graph tracing that the pack/unpack method
     31 # does accepts certain names for functions which have been changed after this
     32 # torch version
     33 MAX_TORCH_VERSION_FOR_PALETT_MAX_MEM = "2.0.1"

File ~/anaconda3/envs/python310/lib/python3.10/site-packages/coremltools/optimize/torch/palettization/palettization_config.py:397
    392         else:
    393             assert value is None, "group_size can't be specified along with per_tensor granularity."
    396 _default_module_type_configs = _OrderedDict(
--> 397     {
    398         key: ModuleDKMPalettizerConfig.from_dict(val)
    399         for key, val in DEFAULT_PALETTIZATION_SCHEME.items()
    400     }
    401 )
    404 _GlobalConfigType = _NewType(
    405     "GlobalConfigType",
    406     _Union[
   (...)
    409     ],
    410 )
    411 _ModuleTypeConfigType = _NewType(
    412     "ModuleTypeConfigType", _Dict[_Union[_Callable, str], _GlobalConfigType]
    413 )

File ~/anaconda3/envs/python310/lib/python3.10/site-packages/coremltools/optimize/torch/palettization/palettization_config.py:398, in <dictcomp>(.0)
    392         else:
    393             assert value is None, "group_size can't be specified along with per_tensor granularity."
    396 _default_module_type_configs = _OrderedDict(
    397     {
--> 398         key: ModuleDKMPalettizerConfig.from_dict(val)
    399         for key, val in DEFAULT_PALETTIZATION_SCHEME.items()
    400     }
    401 )
    404 _GlobalConfigType = _NewType(
    405     "GlobalConfigType",
    406     _Union[
   (...)
    409     ],
    410 )
    411 _ModuleTypeConfigType = _NewType(
    412     "ModuleTypeConfigType", _Dict[_Union[_Callable, str], _GlobalConfigType]
    413 )

File ~/anaconda3/envs/python310/lib/python3.10/site-packages/coremltools/optimize/torch/_utils/python_utils.py:87, in DictableDataClass.from_dict(cls, data_dict)
     85 converter = _cattrs.Converter(forbid_extra_keys=True)
     86 converter.register_structure_hook(_torch.Tensor, lambda obj, type: obj)
---> 87 return converter.structure_attrs_fromdict(data_dict, cls)

File ~/anaconda3/envs/python310/lib/python3.10/site-packages/cattrs/converters.py:742, in BaseConverter.structure_attrs_fromdict(self, obj, cl)
    739     # try .alias and .name because this code also supports dataclasses!
    740     conv_obj[getattr(a, "alias", a.name)] = self._structure_attribute(a, val)
--> 742 return cl(**conv_obj)

File <attrs generated init coremltools.optimize.torch.palettization.palettization_config.ModuleDKMPalettizerConfig>:13, in __init__(self, n_bits, weight_threshold, granularity, group_size, channel_axis, enable_per_channel_scale, milestone, cluster_dim, quant_min, quant_max, dtype, lut_dtype, quantize_activations, cluster_permute, palett_max_mem, kmeans_max_iter, prune_threshold, kmeans_init, kmeans_opt1d_threshold, enforce_zero, palett_mode, palett_tau, palett_epsilon, palett_lambda, add_extra_centroid, palett_cluster_tol, palett_min_tsize, palett_unique, palett_shard, palett_batch_mode, palett_dist, per_channel_scaling_factor_scheme, percentage_palett_enable, kmeans_batch_threshold, kmeans_n_init, zero_threshold, kmeans_error_bnd, partition_size, cluster_dtype)
     11 _setattr('quant_min', quant_min)
     12 _setattr('quant_max', quant_max)
---> 13 _setattr('dtype', __attr_converter_dtype(dtype))
     14 _setattr('lut_dtype', lut_dtype)
     15 _setattr('quantize_activations', quantize_activations)

File ~/anaconda3/envs/python310/lib/python3.10/site-packages/coremltools/optimize/torch/_utils/torch_utils.py:101, in maybe_convert_str_to_dtype(dtype)
     86 def maybe_convert_str_to_dtype(dtype: _Union[str, _torch.dtype]) -> _torch.dtype:
     87     _str_to_dtype_map = {
     88         "quint8": _torch.quint8,
     89         "qint8": _torch.qint8,
     90         "float32": _torch.float32,
     91         "int8": _torch.int8,
     92         "uint8": _torch.uint8,
     93         # Torch doesn't support int4 or int3
     94         # but we can represent it as int8
     95         "int4": _torch.int8,
     96         "uint4": _torch.uint8,
     97         "qint4": _torch.qint8,
     98         "quint4": _torch.quint8,
     99         "uint3": _torch.uint8,
    100         "int3": _torch.int8,
--> 101         "fp8_e4m3": _torch.float8_e4m3fn,
    102         "fp8_e5m2": _torch.float8_e5m2,
    103     }
    104     if isinstance(dtype, str):
    105         dtype = dtype.lower()
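
Looking at the trace, the failure happens while importing coremltools.optimize.torch, which references torch.float8_e4m3fn. As far as I can tell that dtype only exists in newer PyTorch (roughly 2.1 and later), so coremltools 8.0b2 apparently does not import cleanly against torch 1.13.1. A quick local check (illustrative snippet, not from the notebooks):

import torch

print(torch.__version__)
# torch 1.13.1 defines no float8 dtypes, so this prints False there;
# on a PyTorch build that has torch.float8_e4m3fn it prints True.
print(hasattr(torch, "float8_e4m3fn"))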

@andforce
Contributor Author

andforce commented Sep 4, 2024

When I downgraded coremltools to 6.3, the float8_e4m3fn error was fixed,

but I got a new error:

TypeError: "compute_units" parameter must be of type: coremltools.ComputeUnit

TypeError                                 Traceback (most recent call last)
Cell In[19], line 3
      1 max_seq_length = 77
----> 3 text_encoder_model = ct.convert(
      4             traced_model,
      5             convert_to="mlprogram",
      6             minimum_deployment_target=ct.target.iOS16,
      7             inputs=[ct.TensorType(name="prompt",
      8                                  shape=[1,max_seq_length],
      9                                  dtype=np.int32)],
     10             outputs=[ct.TensorType(name="embOutput", dtype=np.float32)],
     11             # compute_units=ct.ComputeUnit[args.compute_unit],
     12             # skip_model_load=True,
     13         )

File ~/anaconda3/envs/python310/lib/python3.10/site-packages/coremltools/converters/_converters_entry.py:635, in convert(model, source, inputs, outputs, classifier_config, minimum_deployment_target, convert_to, compute_precision, skip_model_load, compute_units, package_dir, debug, pass_pipeline, states)
    633     return True
    634 else:
--> 635     raise ValueError(f"Invalid value of the argument 'compute_precision': {compute_precision}")

File ~/anaconda3/envs/python310/lib/python3.10/site-packages/coremltools/converters/mil/converter.py:188, in mil_convert(model, convert_from, convert_to, compute_units, **kwargs)
    149 @_profile
    150 def mil_convert(
    151     model,
   (...)
    155     **kwargs
    156 ):
    157     """
    158     Convert model from a specified frontend `convert_from` to a specified
    159     converter backend `convert_to`.
   (...)
    186         See `coremltools.converters.convert`
    187     """
--> 188     return _mil_convert(model, convert_from, convert_to, ConverterRegistry, MLModel, compute_units, **kwargs)

File ~/anaconda3/envs/python310/lib/python3.10/site-packages/coremltools/converters/mil/converter.py:231, in _mil_convert(model, convert_from, convert_to, registry, modelClass, compute_units, **kwargs)
    227 elif convert_to == "mlprogram":
    228     package_path = _create_mlpackage(
    229         proto, kwargs.get("weights_dir"), kwargs.get("package_dir")
    230     )
--> 231     return modelClass(
    232         package_path,
    233         is_temp_package=not kwargs.get("package_dir"),
    234         mil_program=mil_program,
    235         skip_model_load=kwargs.get("skip_model_load", False),
    236         compute_units=compute_units,
    237     )
    239 return modelClass(
    240     proto,
    241     mil_program=mil_program,
    242     skip_model_load=kwargs.get("skip_model_load", False),
    243     compute_units=compute_units,
    244 )

File ~/anaconda3/envs/python310/lib/python3.10/site-packages/coremltools/models/model.py:337, in __init__(self, model, is_temp_package, mil_program, skip_model_load, compute_units, weights_dir, function_name)
    333 if not isinstance(compute_units, _ComputeUnit):
    334     raise TypeError('"compute_units" parameter must be of type: coremltools.ComputeUnit')
    335 elif (compute_units == _ComputeUnit.CPU_AND_NE
    336       and _is_macos()
--> 337       and _macos_version() < (13, 0)
    338 ):
    339     raise ValueError(
    340         'coremltools.ComputeUnit.CPU_AND_NE is only available on macOS >= 13.0'
    341     )
    342 self.compute_unit = compute_units
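
For what it's worth, the last frame suggests compute_units reached MLModel as something other than a coremltools.ComputeUnit member. A minimal sketch of passing it explicitly (the choice of ALL is just an example, and traced_model is assumed to be the traced text encoder from the earlier cell):

import coremltools as ct
import numpy as np

text_encoder_model = ct.convert(
    traced_model,
    convert_to="mlprogram",
    minimum_deployment_target=ct.target.iOS16,
    inputs=[ct.TensorType(name="prompt", shape=[1, 77], dtype=np.int32)],
    outputs=[ct.TensorType(name="embOutput", dtype=np.float32)],
    compute_units=ct.ComputeUnit.ALL,  # or CPU_ONLY / CPU_AND_GPU / CPU_AND_NE
)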

@mazzzystar
Owner

mazzzystar commented Sep 4, 2024
